| doc_id (string, length 36) | contents (string, 22–3.25k chars) | metadata (dict) |
9169a1b0-23d8-4153-90a9-62797b50568d
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5.3 Zero-Shot End-To-End TOD Evaluation

| Model | | |
|---|---|---|
| …-V1.5 | 33.8 | 23.1 |
| BAICHUAN2-13B-CHAT | 33.0 | 45.7 |
| LLAMA2-7B-CHAT | 16.7 | 24.9 |
| LLAMA2-… | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
23665ed0-0e58-4dfb-836b-1bbe34a0362d
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5.3 Zero-Shot End-To-End TOD Evaluation

| Model | | |
|---|---|---|
| LLAMA2-7B-CHAT | 16.7 | 24.9 |
| LLAMA2-13B-CHAT | 25.8 | 27.7 |

Compared to previous prompting approaches that enable both zero-shot DST and response generation (Hudeček and Dušek, 2023), the superiority of the FnCTOD approach becomes more evident: all open-source models evaluated with our approach outperform the ChatGPT results of Hudeček and Dušek (2023), except for LLAMA2-7B-CHAT. In addition, the results show that the fine-tuned model FNCTOD-LLAMA2-13B retains its ability to generalize and generate informative responses in a zero-shot TOD setting.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4f094608-21bd-42f5-a3d1-551ef1028524
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5.4 Ablation Studies

Impact of different numbers of in-context examples Our initial investigation focuses on the influence of varying the number of in-context examples when conducting few-shot prompting with open-source models, which were not originally trained for function call generation. We assessed the performance of various models with different numbers of in-context examples, ranging from 0 to 5. We note that using more than five examples might exceed the context-window capacity (e.g., 4096 tokens) of some models. The findings are illustrated in Figure 5. The results indicate that the models perform significantly better with in-context examples than with zero-shot prompting. Furthermore, performance improves consistently as the number of examples increases, across most domains and models. This underscores the crucial role of in-context examples when leveraging open-source models for DST through function calling, which is reasonable given that these models were not fine-tuned to generate function calls in the required format solely from the function specification in the system prompt.

| | Attr. | Hotel | Rest. | Taxi | Train |
|---|---|---|---|---|---|
| ChatGPT (GPT-3.5) | | | | | |
| w/o decomp. | 59.64 | 32.24 | 61.39 | 74.87 | |
| w/ decomp. | 67.15 | 37.56 | 60.12 | 74.43 | |
| FNCTOD-LLAMA2-13B | | | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
01bf71f5-0397-4186-ae54-d334fa1c7108
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5.4 Ablation Studies

| | Attr. | Hotel | Rest. | Taxi | Train |
|---|---|---|---|---|---|
| FNCTOD-LLAMA2-13B | | | | | |
| w/o decomp. | 34.77 | 32.02 | 56.63 | 65.4 | |
| w/ decomp. | 62.24 | 46.83 | 60.27 | 67.48 | |

Impact of function call decomposition In each dialogue turn, the model must first identify the appropriate function to call (function selection) and then generate the corresponding arguments (argument generation). We compare our two-step approach with a non-decomposed method, in which all supported functions are included directly in the prompt and the model generates the entire function call, both function name and arguments, in one step. This comparison is conducted on ChatGPT and our fine-tuned FNCTOD-LLAMA2-13B, which supports zero-shot prompting. It is worth noting that the non-decomposed method is the default when using ChatGPT. The results in Table 4 demonstrate that this decomposition consistently leads to performance improvements, highlighting the efficacy of our strategy.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a5ec0a40-040b-4d1c-bc01-f5d11f1d3663
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5.4 Ablation Studies

We compare our two-step approach with a non-decomposed method, in which all supported functions are included directly in the prompt and the model generates the entire function call, both function name and arguments, in one step. This comparison is conducted on ChatGPT and our fine-tuned FNCTOD-LLAMA2-13B, which supports zero-shot prompting. It is worth noting that the non-decomposed method is the default when using ChatGPT. The results in Table 4 demonstrate that this decomposition consistently leads to performance improvements, highlighting the efficacy of our strategy.

Impact of training data sizes Our results indicate that with as few as 200 samples per domain (totaling 7,200 dialogues across 36 domains), we were able to fine-tune a LLAMA2-13B-CHAT model to

| #Data | Attr. | Hotel | Rest. | Taxi | Train | Avg. |
|---|---|---|---|---|---|---|
| 100 | 59.61 | 44.4 | 54.33 | 67.02 | 54.33 | 55.94 |
| 200 | 62.24 | 46.83 | 60.27 | 67.48 | 60.9 | 59.54 |
| 300 | 69.19 | 43.68 | 57.06 | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3d359ebf-fb1d-4f5b-bfdd-b0ea24ee030a
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5.4 Ablation Studies

| #Data | Attr. | Hotel | Rest. | Taxi | Train | Avg. |
|---|---|---|---|---|---|---|
| 300 | 69.19 | 43.68 | 57.06 | 64.98 | 57.6 | 58.5 |
| 400 | 60.8 | 43.21 | 57.39 | 65.7 | 53.78 | 56.18 |

match the zero-shot DST performance of ChatGPT. We explored the model's performance with varying numbers of samples, ranging from 100 to 400 per domain. The results, shown in Table 5, indicate that optimal performance is achieved with 200 samples per domain. We speculate that beyond this point, additional training samples cause the model to overfit to the domains in the training data, making it less effective at zero-shot generalization.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4cfcc16b-2ef0-4f92-ba89-c87c41aec710
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 6 Conclusion

We introduce a new approach to tackle the challenging task of zero-shot DST with LLMs, enabling them to handle both general conversations and task-oriented dialogues in diverse domains without additional data collection. Our experimental results on MultiWOZ demonstrate that our approach not only delivers exceptional performance with advanced ChatGPT models (setting a new benchmark) but also works across a range of moderately sized open-source LLMs. Furthermore, we demonstrate that the open-source model LLAMA2-13B-CHAT can be fine-tuned with only 7,200 training samples from 36 diverse domains; the resulting FNCTOD-LLAMA2-13B achieves function-calling and zero-shot DST performance comparable to ChatGPT.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
44356270-a58b-45b0-8572-e149d8a4495c
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 7 Limitations

In this work, we propose a novel approach to solve zero-shot DST with LLMs. Our approach achieves outstanding performance with various LLMs, both modestly sized open-source and advanced proprietary LLMs, setting a new state-of-the-art. However, it is important to recognize that the current accuracy may not yet be high enough for practical deployment of such zero-shot systems. We anticipate that with further advancements in the NLU and NLG capabilities of base LLMs, our approach could achieve even greater performance. In addition, while our approach can handle both DST and response generation in TOD, it is worth noting that, due to the current lack of a more realistic evaluation setting for response generation in TOD, we used delexicalized responses for evaluation, as is widely done in prior work. This setting and its associated metrics have known shortfalls: the metrics can be gamed with non-natural responses, and the setting presents a data mismatch with how LLMs are trained. In the era of LLMs, we advocate for the development of more realistic evaluation approaches for full natural-language response generation in TOD.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
12e11612-5a4b-4c18-bd6a-a0ad481c267d
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A Appendix A.1 Evaluation Details

We evaluated two versions of ChatGPT and six leading chat/instruction-tuned LLMs representing varying sizes and instruction-following and conversational capabilities. The six evaluated open-source models are: ZEPHYR-7B-BETA (Tunstall et al., 2023), an instruction-tuned version of Mistral-7B (Jiang et al., 2023), which is the leading model of its size on the AlpacaEval leaderboard (Li et al., 2023b); VICUNA-7B-V1.5 and VICUNA-13B-V1.5 (Chiang et al., 2023), LLAMA-2 models fine-tuned on user conversations with ChatGPT; LLAMA2-7B-CHAT and LLAMA2-13B-CHAT, chat-tuned versions of LLAMA2 models of varying sizes (Touvron et al., 2023); and BAICHUAN2-13B-CHAT, also a LLAMA2-13B model further fine-tuned on an extensive corpus (Baichuan, 2023). We utilized their checkpoints available on Hugging Face; the specific paths for these models are detailed in Table 8. For inference, the temperature was fixed at 0.3, top_p at 0.2, and max_tokens at 128. For each test case, we conducted a single inference run. All inferences were executed on a cluster equipped with eight 48GB NVIDIA RTX A6000 GPUs.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
62ec9778-47d7-40a3-bba2-8c56c7aa5bcd
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.2 Training Details

Training Data For constructing our fine-tuning dataset, we selected five high-quality, multi-turn TOD corpora, excluding MultiWOZ, as detailed in Table 9. Each dataset encompasses one or multiple domains. We excluded several domains with low-quality annotations, retaining a total of 36 domains. For our fine-tuning, we exclusively sampled data from the training sets of these datasets to constitute our training data.

Hyperparameters We fine-tuned the LLaMA-2-13b-Chat checkpoint from Hugging Face. We utilized Low-Rank Adaptation (LoRA) (Hu et al., 2021) and limited our fine-tuning to the parameters in the q_proj and v_proj modules. Further details about the fine-tuning hyperparameters can be found in Table 6. The fine-tuning was conducted on 4 A6000 48GB GPUs.

| Hyperparameter | Value |
|---|---|
| batch size | 8 |
| epochs | 1 |
| learning rate | 0. |
| learning rate scheduler | cosine |
| weight decay | 0 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
052191ff-3b3f-4280-be75-ff4027ec4f0f
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.2 Training Details

| Hyperparameter | Value |
|---|---|
| learning rate | 0. |
| learning rate scheduler | cosine |
| weight decay | 0. |
| cutoff_len | 4096 |
| lora_r | 16 |
| lora_alpha | 16 |
| lora_dropout | 0. |
| lora_target_modules | q_proj, v_proj |

| Model | Accuracy |
|---|---|
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fdbf425d-4b9c-415c-8348-0a9a802f1789
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.2 Training Details

| Hyperparameter | Value |
|---|---|
| lora_dropout | 0. |
| lora_target_modules | q_proj, v_proj |

| Model | Accuracy |
|---|---|
| ChatGPT (GPT-3.5) | 95.54 |
| ChatGPT (GPT-4) | 88.62 |
| FNCTOD-LLAMA2-13B | 91.68 |
| ZEPHYR-7B-BETA | 92.77 |
| VICUNA-… | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6e33b253-8a61-4e2b-bccb-1ae80aded97d
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.2 Training Details

| Model | Accuracy |
|---|---|
| ZEPHYR-7B-BETA | 92.77 |
| VICUNA-7B-V1.5 | 94.75 |
| VICUNA-13B-V1.5 | 91.82 |
| BAICHUAN2-13B-CHAT | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ce224f60-402c-4d3a-9117-3900f6165d13
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.2 Training Details

| Model | Accuracy |
|---|---|
| VICUNA-13B-V1.5 | 91.82 |
| BAICHUAN2-13B-CHAT | 92.50 |
| LLAMA2-7B-CHAT | 91.90 |
| LLAMA2-13B-CHAT | 89.34 |
| LLAMA2-… | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ed3c526d-a6b1-4348-a4a9-221aec68c0bd
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.2 Training Details

| Model | Accuracy |
|---|---|
| LLAMA2-13B-CHAT | 89.34 |
| LLAMA2-70B-CHAT | 90.25 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
64c6b310-2052-49a2-b735-29a81c59fdb8
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.3 More Results

A.3.1 Function Selection Accuracy

In our approach, we divide the function call generation process into two steps: (1) function/domain selection: the model selects a function/domain to call from the list of all supported functions by generating the function name; (2) argument generation: the model generates the arguments for the selected function. We present the results using the predicted domains instead of oracle domains in Table 2. Additionally, we provide the accuracy of the function/domain prediction in Table 7. It is evident that function/domain selection is a straightforward task for all the evaluated models.

A.3.2 Ablation Studies

We conduct further investigation into effective prompt strategies, including the dialogue prompt format and methods for describing supported functions.

Impact of the unified dialogue prompt We initiated our analysis of effective prompt strategies for in-context prompting using open-source models. In our approach, we seamlessly integrate function calls into the assistant's output, incorporating them within the conversation context rather than treating them as a separate task. To evaluate the impact of this choice, we compared scenarios where function calls were included in or omitted from the conversation context.

| Model | Model versioning/path |
|---|---|
| GPT-3.5-Turbo | gpt-3.5-turbo-1106 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
aee6609a-f750-44ea-8c99-d59d062bd927
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.3 More Results

| Model | Model versioning/path |
|---|---|
| GPT-3.5-Turbo | gpt-3.5-turbo-1106 |
| GPT-4 | gpt-4-1106-preview |
| Zephyr-7B-Beta | https://huggingface.co/HuggingFaceH4/zephyr-7b-beta |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
21e1682a-ee30-49ec-b009-40b17da52236
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.3 More Results

| Model | Model versioning/path |
|---|---|
| Zephyr-7B-Beta | https://huggingface.co/HuggingFaceH4/zephyr-7b-beta |
| Vicuna-7B-v1.5 | https://huggingface.co/lmsys/vicuna-7b-v1.5 |
| Vicuna-13B-v1.5 | https://huggingface.co/lmsys/vicuna-13b-v1.5 |
| Baichuan2-13B-Chat | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d15a3e4e-d036-40ad-a415-133754eb9b5b
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.3 More Results

| Model | Model versioning/path |
|---|---|
| Vicuna-13B-v1.5 | https://huggingface.co/lmsys/vicuna-13b-v1.5 |
| Baichuan2-13B-Chat | https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat |
| LLaMA2-7B-Chat | https://huggingface.co/meta-llama/Llama-2-7b-chat-hf |
| LLaMA2-13B-Chat | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9de8769c-a5fa-437f-b693-e5088c8bf9f7
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.3 More Results

| Model | Model versioning/path |
|---|---|
| LLaMA2-7B-Chat | https://huggingface.co/meta-llama/Llama-2-7b-chat-hf |
| LLaMA2-13B-Chat | https://huggingface.co/meta-llama/Llama-2-13b-chat-hf |

| Dataset | Domains |
|---|---|
| Schema-Guided (Rastogi et al.) | Services_1, Services_2, Services_3, Media_1, RideSharing_1, RideSharing_2, Travel_1, Hotels_1, Hotels_2, Hotels_3, Fl… |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f119e0f5-d394-469d-a4e1-d80fff7d93be
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.3 More Results

| Dataset | Domains |
|---|---|
| Schema-Guided (Rastogi et al.) | Services_1, Services_2, Services_3, Media_1, RideSharing_1, RideSharing_2, Travel_1, Hotels_1, Hotels_2, Hotels_3, Flights_1, Flights_2, Restaurants_1, Calendar_1, Music_1, Music_2, Weather_1, Movies_1, Homes_1, Banks_1 |
| CamRest676 (Wen et al.) | |
| MSR-E2E (Li et al.) | |
| TaskMaster (Byrne et al.) | |
| WOZ (Mrkšić et al., 2016) | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f8f7c501-5aa9-4c41-b8f2-6fb9823be89f
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.3 More Results

| Dataset | Domains |
|---|---|
| MSR-E2E (Li et al.) | |
| TaskMaster (Byrne et al.) | |
| WOZ (Mrkšić et al., 2016) | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5b1fc8de-9cf8-41ca-aa4a-baf5540af460
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## System Prompt

The results, depicted in Figure 6, emphasize the effectiveness of embedding function calls within the conversation context.

Impact of function specification types In addition to directly including function specifications in JSON within the prompt, we experimented with translating them into more human-readable natural language descriptions. Figure 6 presents a comparison between using the JSON format directly (json) and converting it into natural language descriptions (text). The results indicate that the models perform similarly with both methods of function specification.

The parts enclosed in brackets and highlighted in blue serve as placeholders and are replaced with specific function specifications and example conversations related to that function/domain. The example part is only employed for few-shot prompting with models not fine-tuned for function calling.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e1282637-72fe-414b-bcec-4794505f208a
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.4 Prompts

Conversation Context For each evaluated LLM, we adopted the specific chat format used in its fine-tuning with regard to how the conversation is formatted within the prompt.

System prompt In our evaluation, we utilized the following system prompt template:

Function Specifications For the function specifications within the system prompt, we adhere to ChatGPT's format. To enhance model comprehension, we also experimented with translating the JSON format into a natural language description to include in the system prompt. An example illustrating both the JSON format and its corresponding natural language description for a specific domain is depicted in Figure 7.

Full Prompt Combining all components, an example of the full dialogue prompt is displayed in

{
  "name": "find_book_hotel",
  "description": "hotel reservations and vacation stays.",
  "arguments": [
    {
      "name": "name",
      "type": "string",
      "description": "name of the hotel"
    },
    {
      "name": "pricerange",
      "type": "string",
      "description": "price budget of the hotel",
      "possible_values": ["expensive", "cheap", "moderate"]
    },
    ……
  ]
}
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
38d9c3a6-c60a-4c88-9382-ac896c83ca25
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.4 Prompts

    {
      "name": "name",
      "type": "string",
      "description": "name of the hotel"
    },
    {
      "name": "pricerange",
      "type": "string",
      "description": "price budget of the hotel",
      "possible_values": ["expensive", "cheap", "moderate"]
    },
    ……
  ]
}

Natural Language Description (Text)

Function **name**: find_book_hotel
Function **description**: hotel reservations and vacation stays.
Function **arguments**:
- **name** (string): name of the hotel
- **pricerange** (string): price budget of the hotel (must be one of expensive, cheap, moderate)
- **people** (integer): number of people for the hotel booking (must be one of 1, 2, 3, 4, 5, 6, 7, 8)
- **stay** (integer): length of stay at the hotel (must be one of 1, 2, 3, 4, 5, 6, 7, 8)
- **stars** (integer): star rating of the hotel (must be one of 0, 1, 2, 3, 4, 5)
- **internet** (boolean): whether the hotel has internet (must be one of free, no, yes)
- **area** (string): area or place of the hotel (must be one of centre, east, north, south, west)
……
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4c472ea9-4a1b-41d1-b0a4-898a69b4fac5
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## A.4 Prompts

- **people** (integer): number of people for the hotel booking (must be one of 1, 2, 3, 4, 5, 6, 7, 8)
- **stay** (integer): length of stay at the hotel (must be one of 1, 2, 3, 4, 5, 6, 7, 8)
- **stars** (integer): star rating of the hotel (must be one of 0, 1, 2, 3, 4, 5)
- **internet** (boolean): whether the hotel has internet (must be one of free, no, yes)
- **area** (string): area or place of the hotel (must be one of centre, east, north, south, west)
……
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
df417d21-d8b1-4ba8-b048-a8d4bcd5af22
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## The Full Dialogue Prompt

You are a task-oriented assistant. You can use the given functions to fetch further data to help the users.

<FUNCTIONS>
<FUNCTION>
{
  "name": "find_attraction",
  "description": "find touristy stuff to do around you.",
  "arguments": [
    { "name": "area", "type": "string", "description": "area to search for attractions", "possible_values": ["centre", "east", "north", "south", "west"] },
    { "name": "name", "type": "string", "description": "name of the attraction" }
  ]
}
</FUNCTION>
<FUNCTION>
{
  "name": "find_book_hotel",
  "description": "hotel reservations and vacation stays.",
  "arguments": [
    { "name": "name", "type": "string", "description": "name of the hotel" },
    { "name": "pricerange", "type": "string", "description": "price budget of the hotel", "possible_values": ["expensive", "cheap", "moderate"] },
    { "name": "parking", "type": "boolean", "description": "whether the hotel has parking", "possible_values": ["free", "no", "yes"] },
    ……
  ]
}
</FUNCTION>
</FUNCTIONS>

To call a function, reply with a JSON object of the following format: {"function": "function_name", "arguments": {"argument1": "argument_value", "argument2": "argument_value"}}

User: i would like information about museums in cambridge .
Assistant: <function_call> {"function": "find_attraction", "arguments": {"type": "museum"}} </function_call> there are 23 museums in cambridge . what part of town will you be in ?
User: can you tell me about museums in the centre of town ?
Assistant: <function_call> {"function": "find_attraction", "arguments": {"type": "museum", "area": "centre"}} </function_call> i have the castle galleries museum in the centre of town . phone number is 01223307402 .
User: i also want to find the hotel called archway house .
Assistant: <function_call> {"function": "find_book_hotel", "arguments": {"name": "archway house"}} </function_call>
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
33e14682-afef-4259-af33-296e08039186
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## The Full Dialogue Prompt

… there are 23 museums in cambridge . what part of town will you be in ?
User: can you tell me about museums in the centre of town ?
Assistant: <function_call> {"function": "find_attraction", "arguments": {"type": "museum", "area": "centre"}} </function_call> i have the castle galleries museum in the centre of town . phone number is 01223307402 .
User: i also want to find the hotel called archway house .
Assistant: <function_call> {"function": "find_book_hotel", "arguments": {"name": "archway house"}} </function_call> it is located in the north , on 52 gilbert road , cb43pe . would you like to book a room ?
User: yes , please book for 4 nights on friday for 4 people .
Assistant: <function_call> {"function": "find_book_hotel", "arguments": {"name": "archway house", "area": "north", "people": "4", "day": "friday", "stay": "4"}} </function_call> all set , the reference number is 6s9gcfdo , can i help with anything else ?
User: that is all . thank you so much for your help .
Assistant: have a great day ! goodbye !
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
34c00804-0e6c-47d1-a1e4-5b5db4794fa3
# Using Counterfactual Tasks To Evaluate The Generality Of Analogical Reasoning In Large Language Models

Martha Lewis (martha.lewis@bristol.ac.uk)
School of Engineering Mathematics and Technology, University of Bristol, Bristol BS8 1TW, UK
Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA

Melanie Mitchell (mm@santafe.edu)
Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
{ "creation_datetime": "2024-03-04", "file_name": "2402.08955v1.md", "file_path": "paper_data/2402.08955v1.md", "file_size": 30918, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2e960d02-a412-4830-8d12-182f898bd21c
# Using Counterfactual Tasks To Evaluate The Generality Of Analogical Reasoning In Large Language Models ## Abstract

Large language models (LLMs) have performed well on several reasoning benchmarks, including ones that test analogical reasoning abilities. However, it has been debated whether they are actually performing humanlike abstract reasoning or instead employing less general processes that rely on similarity to what has been seen in their training data. Here we investigate the generality of analogy-making abilities previously claimed for LLMs (Webb, Holyoak, & Lu, 2023). We take one set of analogy problems used to evaluate LLMs and create a set of "counterfactual" variants—versions that test the same abstract reasoning abilities but that are likely dissimilar from any pre-training data. We test humans and three GPT models on both the original and counterfactual problems, and show that, while the performance of humans remains high for all the problems, the GPT models' performance declines sharply on the counterfactual set. This work provides evidence that, despite previously reported successes of LLMs on analogical reasoning, these models lack the robustness and generality of human analogy-making.

Keywords: Analogy; Reasoning; Letter-String Analogies; Large Language Models

## Introduction

The degree to which pre-trained large language models (LLMs) can reason—deductively, inductively, analogically, or otherwise—remains a subject of debate in the AI and cognitive science communities. Many studies have shown that LLMs perform well on certain reasoning benchmarks (Huang & Chang, 2022; Wei, Tay, et al., 2022; Wei, Wang, et al., 2022). However, other studies have questioned the extent to which these systems are able to reason abstractly, as opposed to relying on "approximate retrieval" from encoded training data (Kambhampati, 2023), a process which yields "narrow, non-transferable procedures for task solving" (Wu et al., 2023).
Several groups have shown that LLMs' performance on reasoning tasks degrades, in some cases quite dramatically, on versions of the tasks that are likely to be rare in or outside of the LLMs' training data (Dziri et al., 2023; McCoy, Yao, Friedman, Hardy, & Griffiths, 2023; Razeghi, Logan IV, Gardner, & Singh, 2022; Wu et al., 2023). In particular, Wu et al. (2023) proposed evaluating the robustness and generality of LLMs' reasoning ability by testing them
not only on *default tasks* that are likely similar to ones seen in training data, but also on *counterfactual tasks*, ones that "deviate from the default, generally assumed conditions for these tasks" and are unlikely to resemble those in training data. If an LLM is using general abstract reasoning procedures, it should perform comparably on both default and counterfactual tasks; if it is using procedures that rely on similarity to training data, the performance should drop substantially on the counterfactual versions. Here we use this counterfactual-task approach to evaluate the claim that LLMs have general abilities for abstract analogical reasoning. In particular, we focus on the results reported by Webb et al. (2023) on the abilities of GPT-3 to solve letter-string analogy problems. We develop a set of counterfactual variants on the letter-string analogy problems used by Webb et al., in which similar problems are posed with nonstandard alphabets—ones that are either permuted to various degrees, or that are composed of non-letter symbols. We argue that a system able to reason by analogy in a general way would have comparable performance on the original and counterfactual versions of these problems.
We test both humans and different GPT models, and show that while humans exhibit high performance on both the original and counterfactual problems, the performance of all GPT models we tested degrades on the counterfactual versions. These results provide evidence that LLMs' analogical reasoning still lacks the robustness and generality exhibited by humans.
## Background And Related Work

Letter-string analogies were proposed by Hofstadter (1985) as an idealized domain in which processes underlying human analogy-making could be investigated. One example problem is the following:

a b c d $\rightarrow$ a b c e ; i j k l $\rightarrow$ ?

Here, a b c d $\rightarrow$ a b c e is called the "source transformation" and i j k l is called the "target." The solver's task is to generate a new string that transforms the target analogously to the source transformation. There is no single "correct" answer to such problems, but there is typically general agreement in how humans answer them. For example, for the problem above, most people answer i j k m, and answers that deviate from this tend to do so in particular ways (Mitchell, 1993). In addition to the work of Hofstadter and Mitchell (1994) on creating computer models of analogy-making using this domain, letter-string analogies have been used to isolate the neural correlates of analogy formation (Long et al., 2015; Geake & Hansen, 2010), and as a model of higher-order analogy retrieval (Dekel, Burns, & Goldwater, 2023). Webb et al. (2023) compared the ability of GPT-3 with that of humans on several analogical reasoning domains, including letter-string analogies. On the letter-string domain, they found that in most cases GPT-3's performance exceeded the average performance of the human participants, where performance is measured as fraction of "correct" answers on a given set of letter-string problems. As we mentioned above, such problems do not have a single correct answer, but Webb et al. used their intuitions to decide which answer displays abstract analogical reasoning and thus should be considered "correct." In this paper we will use their definition of correctness.
Hodel and West (2023) tested GPT-3 with two types of counterfactual variations on the letter-string analogy problems used by Webb et al.: ones that include larger intervals between letters, and ones with randomly shuffled alphabets. They found that GPT-3 performed poorly on both variations. Here we experiment with similar, but more systematic variations, and we compare the performance of three different GPT models with that of humans on these variations.
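The Successor and Predecessor transformations above are mechanical enough to state in a few lines of code. The sketch below is our own illustrative helper (not part of any model under test): it finds the single position changed in the source pair and applies the same alphabet step at that position in the target.

```python
# A mechanical solver for zero-generalization Successor/Predecessor
# problems such as [a b c d] -> [a b c e] ; [i j k l] -> ?
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def solve_step_rule(src, dst, target, alphabet=ALPHABET):
    """Find the one position changed in the source pair and apply
    the same alphabet step at that position in the target."""
    changed = [i for i in range(len(src)) if src[i] != dst[i]]
    assert len(changed) == 1, "expected exactly one changed letter"
    i = changed[0]
    # Step implied by the source transformation (+1 for Successor,
    # -1 for Predecessor), measured in the given alphabet ordering.
    step = alphabet.index(dst[i]) - alphabet.index(src[i])
    out = list(target)
    out[i] = alphabet[alphabet.index(target[i]) + step]
    return "".join(out)

print(solve_step_rule("abcd", "abce", "ijkl"))  # -> "ijkm"
print(solve_step_rule("bcde", "acde", "ijkl"))  # Predecessor -> "hjkl"
```

Because the alphabet is a parameter, the same procedure applies unchanged to the permuted alphabets introduced later; the only thing that changes is the ordering used to compute the step.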
## Original Analogy Problems

Webb et al. (2023) proposed a set of problem types involving different types of transformations and different levels of generalization, on which they tested humans and GPT-3. The following are the six transformation types with sample transformations:

1. Extend sequence: a b c d → a b c d e
2. Successor: a b c d → a b c e
3. Predecessor: b c d e → a c d e
4. Remove redundant letter: a b b c d → a b c d
5. Fix alphabetic sequence: a b c w e → a b c d e
6. Sort: a d c b e → a b c d e

Each type of transformation can be paired with a simple target (e.g., i j k l) or with the following types of generalizations:

1. Letter-to-number: a b c d → a b c e ; 1 2 3 4 → ?
2. Grouping: a b c d → a b c e ; i i j j k k l l → ?
3. Longer target: a b c d → a b c e ; i j k l m n o p → ?
4. Reversed order: a b c d → a b c e ; l k j i → ?
5. Interleaved distractor: a b c d → a b c e ; i x j x k x l x → ?
6. Larger interval: a b c d → a b c e ; i k m o → ?

Finally, Webb et al. include a number of problems involving "real-world" concepts, such as

$$\mathrm{a~b~c}\rightarrow\mathrm{a~b~d}\;;\;\;\mathrm{cold~cool~warm}\rightarrow\;?$$

Webb et al. generated 100 problems for each problem type and presented these to GPT-3 (text-davinci-003). Webb et al. also tested 57 UCLA undergraduates on the same problems. The human participants exhibited a large variance in accuracy, but on average, Webb et al. found that GPT-3 outperformed the human participants on most problem types. Due to the costs of human and computer experiments, we focus here on problems with simple targets (
i.e., no numbers, grouping, etc. in the target string). Webb et al. called this the "zero generalization setting"; it is the setting in which both humans and GPT-3 performed best. Extending to problems with different generalization types is a topic for future work.
## Counterfactual Analogy Problems

To create our dataset of counterfactual problems, we did the following. First, we generated permuted alphabets, in which we reorder n letters, where n can be 2, 5, 10, or 20. For each of the four values of n, we generated seven distinct alphabets with n randomly chosen letters reordered. Then, for each of these alphabets, we created 10 different analogy problems for each of Webb et al.'s six transformation types. This results in 7 × 10 × 6 = 420 analogy problems for each value of n. We added to this 420 analogy problems using the non-permuted (n = 0) alphabet, spread evenly over the six transformation types. Figure 1 gives an example of a Fix Alphabetic Sequence problem using an alphabet with two letters (e and m) reordered. As a final set of counterfactual problems, we generated two non-letter symbol alphabets and used them to create 10 problems each for the Successor and Predecessor problem types, for a total of 40 unique non-letter symbol problems. Figure 2 gives an example of a Predecessor problem using a symbol alphabet. Our dataset of counterfactual problems, along with code for generating them, is available at https://github.com/marthaflinderslewis/counterfactual analogy.

## Human Study Methods

In order to assess humans' abilities on the original and counterfactual letter-string problems, we collected data from 136 participants on Prolific Academic.1 Participants were screened to have English as a first language, to currently be living in the UK, the USA, Australia, New Zealand, Canada, or Ireland, and to have a 100% approval rate on Prolific.
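The alphabet-permutation step described above (reordering n randomly chosen letters) can be sketched as follows. This is our own illustrative reconstruction, not the authors' released generation code:

```python
import random

ALPHABET = list("abcdefghijklmnopqrstuvwxyz")

def permuted_alphabet(n, rng=None):
    """Return an alphabet in which `n` randomly chosen letters
    have been reordered among themselves (n = 0: unchanged)."""
    rng = rng or random.Random()
    alphabet = ALPHABET[:]
    positions = sorted(rng.sample(range(26), n))
    letters = [alphabet[p] for p in positions]
    shuffled = letters[:]
    # Reshuffle until at least one chosen letter actually moves.
    while n >= 2 and shuffled == letters:
        rng.shuffle(shuffled)
    for p, letter in zip(positions, shuffled):
        alphabet[p] = letter
    return alphabet

# Seven alphabets for each value of n, as in the dataset description.
alphabets = {n: [permuted_alphabet(n, random.Random(seed))
                 for seed in range(7)] for n in (0, 2, 5, 10, 20)}
print("".join(alphabets[2][0]))
```

Each problem is then generated against one of these alphabets, with "successor" and "predecessor" defined by the alphabet's new ordering rather than the conventional one.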
Each participant was asked to solve 14 letter-string analogy problems: one problem from each of the six transformation types for each of two (randomly chosen) alphabets (chosen from alphabets with n *∈ {*0,2,5,10,20}), as well as one Successor and one Predecessor problem for a randomly chosen symbol alphabet. Figures 1–2 show the format on which the problems appeared on the participants' screens. In addition to the 14 problems, participants were also given two attention-check questions at random points during the experiment, with a warning that if the attention checks were failed, then payment ($7 for the experiment)
would be withheld. Figure 3 gives an example of an attention check. Two of the 136 participants' submissions were rejected due to failed attention checks. As in Webb et al. (2023), as part of the initial instructions participants were given a simple example problem to complete and then were shown the solution.

1 https://www.prolific.com/academic-researchers

## GPT Study Methods

We evaluated the performance of three LLMs—GPT-3 (text-davinci-003), GPT-3.5 (gpt-3.5-turbo-0613), and GPT-4 (gpt-4-turbo-0613)—on the same problems given to humans. Following Webb et al. (2023), all GPT experiments were done with temperature set to zero. GPT-3.5 and GPT-4 require a slightly different input to GPT-3. GPT-3 takes in a single prompt, whereas GPT-3.5 and GPT-4 take in a list of messages that define the role of the system, input from a 'user' role, and optionally some dialogue with simulated responses from the model given under the role 'assistant'.

**Baseline** In the baseline setting, we evaluated the performance of GPT-3, GPT-3.5 and GPT-4 on the zero-generalization problems provided by Webb et al. (2023). For GPT-3.5 and GPT-4, our system and user prompts have the following format:

System: You are able to solve letter-string analogies.
User: Let's try to complete the pattern:\n\n[a b c d] [a b c e]\n[i j k l] [

The user
prompt is identical to the prompt Webb et al. gave to GPT-3; the \n character signifies a line break to the model. In this and all other experiments, we tested GPT-3 with a concatenation of the system and user prompts. Following Webb et al., in our experiments all GPT model responses were truncated at the point where a closing bracket was generated.

**Counterfactual Comprehension Check** For problems involving permuted alphabets, we follow Wu et al. (2023) by providing counterfactual comprehension checks (CCCs) to check that the models understand the task proposed. We use two CCCs: firstly, given an alphabet and a sample letter (or symbol) from that alphabet, give the successor of that letter. Secondly, we use the same format but ask for the predecessor of the letter. We ensure that we do not ask for the successor of the last letter in the alphabet or the predecessor of the first. The prompts for these checks have the following format:

System: You are able to solve simple letter-based problems.
User: Use this fictional alphabet: [a u c ....]. \nWhat is the next letter after a?\nThe next letter after a is:

System: You are able to solve simple letter-based problems.
User: Use this fictional alphabet: [a u c ....]. \nWhat is the letter before c?\nThe letter before c is:

**Counterfactual Analogy Problems** To evaluate the performance of GPT models, we tested several prompt formats, including one similar to instructions given in our human study.
The best performance across models was achieved with the prompt format used in Hodel and West (2023):

System: You are able to solve letter-string analogies.
User: Use this
fictional alphabet: [a u c d e f g h i j k l m n o p q r s t b v w x y z]. \nLet's try to complete the pattern:\n[a u c d] [a u c e]\n[i j k l] [

The results we report here for our tests of the GPT models all use this prompt. Note that in our studies, the "fictional alphabet" part of the prompt and the alphabet listing was included even for problems using the non-permuted (n = 0) alphabet.

## Results

**Human Experiments** Figure 4 compares our behavioral data with that of Webb et al. The participants in our study achieved higher average accuracy than those of Webb et al. (abbreviated as "Webb" in the figure). We do, however, see a similar pattern of performance between our participants and theirs.

**Baseline** The baseline setting looks at the performance of different GPT models on the zero-generalization problems designed by Webb et al. Figure 5 shows GPT-3 data from Webb et al. ("GPT-3 Webb") compared with data from our computational experiments with all three models. Our results are similar to those of Webb et al., with notable differences on the Predecessor and Fix Alphabet problems for GPT-3, differences on Remove Redundant, Fix Alphabet, and Sort for GPT-3.5, and on Remove Redundant and Sort for GPT-4.
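Concretely, the prompt construction and closing-bracket truncation described above can be sketched as below. The message dictionaries follow the standard OpenAI chat format; `build_messages` and `truncate_answer` are our own names, and no actual API call is shown:

```python
def build_messages(alphabet, src, dst, target):
    """Build chat messages in the Hodel and West (2023) prompt format."""
    user = (
        f"Use this fictional alphabet: [{' '.join(alphabet)}]. "
        "\nLet's try to complete the pattern:"
        f"\n[{' '.join(src)}] [{' '.join(dst)}]"
        f"\n[{' '.join(target)}] ["
    )
    return [
        {"role": "system",
         "content": "You are able to solve letter-string analogies."},
        {"role": "user", "content": user},
    ]

def truncate_answer(completion):
    """Keep the model output only up to the first closing bracket."""
    return completion.split("]", 1)[0].strip()

msgs = build_messages("aucdefghijklmnopqrstbvwxyz", "aucd", "auce", "ijkl")
print(msgs[1]["content"])
print(truncate_answer("i j k m] and some trailing text"))  # -> "i j k m"
```

For GPT-3, which takes a single prompt rather than a message list, the system and user strings would simply be concatenated, as described above.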
**Counterfactual Comprehension Check** For the non-permuted alphabet, each permuted alphabet, and for the two symbol alphabets, we performed the Successor and Predecessor CCCs on each letter (or symbol) as described above. Results for these CCCs are reported in
Table 1. We see that accuracy is generally high, indicating that the models generally understand the concept of a permuted alphabet and what "successor" and "predecessor" mean in the new ordering. One exception is the ability of GPT-3.5 to understand "predecessor" in alphabets with two or five letters permuted.

Table 1: Successor ("Succ") and Predecessor ("Pred") CCC accuracy by alphabet.

|                  | n = 0 U | n = 0 P | n = 2 U | n = 2 P | n = 5 U | n = 5 P | n = 10 U | n = 10 P | n = 20 U | Symbol U |
|------------------|---------|---------|---------|---------|---------|---------|----------|----------|----------|----------|
| **Succ** GPT-3   | 1.00    | 0.82    | 1.00    | 0.76    | 1.00    | 0.85    | 1.00     | 0.84     | 1.00     | 1.00     |
| GPT-3.5          | 1.00    | 0.89    | 0.99    | 0.95    | 1.00    | 0.99    | 0.98     | 0.98     | 1.00     | 0.94     |
| GPT-4            | 1.00    | 1.00    | 1.00    | 1.00    | 1.00    | 0.99    | 1.00     | 1.00     | 1.00     | 1.00     |
| Total items      | 175     | 28      | 147     | 62      | 113     | 111     | 64       | 164      | 11       | 18       |
| **Pred** GPT-3.5 | 1.00    | 0.49    | 1.00    | 0.72    | 1.00    | 0.92    | 1.00     | 0.87     | 1.00     | 0.94     |
| GPT-4            | 1.00    | 0.93    | 1.00    | 0.97    | 0.99    | 1.00    | 1.00     | 0.98     | 1.00     | 1.00     |
| Total items      | 175     | 28      | 147     | 60      | 115     | 109     | 66       | 163      | 12       | 18       |

**Counterfactual Analogy Problems: Comparisons Between Humans and GPT Models** Table 2 gives the mean accuracy and 95% binomial confidence intervals for humans and GPT models across all alphabets and problem types. It is clear that average human performance on these problems is significantly higher than that of any of the GPT models.

Table 2: Mean accuracy with 95% binomial confidence intervals.

|         | Accuracy | 95% Binomial Conf. |
|---------|----------|--------------------|
| Humans  | 0.753    | [0.734, 0.773]     |
| GPT-3   | 0.488    | [0.467, 0.509]     |
| GPT-3.5 | 0.350    | [0.330, 0.370]     |
| GPT-4   | 0.452    | [0.431, 0.473]     |

Figure 6 shows the performance of human participants and GPT models, averaged across problem types for each kind of alphabet. We again see that human performance is significantly above the performance of each GPT model, across all types of alphabets, and, most notably, stays relatively constant across alphabets with different numbers of letter permutations. In contrast, the GPT models show a more dramatic drop for the counterfactual problems. Figure 7 breaks down performance over each problem type by alphabet. We again see a clear difference between human and GPT performance across all problem types (with the exception of Add Letter and, in the case of GPT-3, Remove Redundant), and we see that while human performance does not substantially decrease over types of alphabet, the GPT models typically experience such decreases.
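The 95% intervals in Table 2 are consistent with a simple normal-approximation binomial interval over all scored responses. A sketch follows; the item count (134 accepted participants × 14 problems each) is our own back-of-envelope assumption for the human row:

```python
import math

def binomial_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion
    estimated from n Bernoulli trials."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half, p_hat + half)

# Human row of Table 2: 134 accepted participants x 14 problems each.
lo, hi = binomial_ci(0.753, 134 * 14)
print(f"[{lo:.3f}, {hi:.3f}]")  # close to Table 2's [0.734, 0.773]
```

The GPT rows, which pool all 1,720 letter-string items plus the 40 symbol items, yield similarly narrow intervals, which is why the human-model gaps in Table 2 are clearly significant.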
## Summary Of Results

We see a number of phenomena here. The performance of our human participants on Webb et al.'s original problems was higher than that of participants in Webb et al.'s experiments (Figure 4). This may be due to differences in experimental protocols or in the participant pools. Our computational data collected from GPT models is roughly similar to that of Webb et al., but differs slightly when we look at performance at the problem-type level (Figure 5). In contrast to Webb et al., we find that performance by GPT models on the original problems is generally lower than average human performance. And for our counterfactual problems, unlike humans, GPT models exhibit a decrease in performance going from a standard alphabet to permuted alphabets, and another sharp decrease going from alphabetic sequences to symbolic sequences (Figure 6). This implies that GPT models are substantially less robust than humans on letter-string analogy problems involving sequences unlikely to be in their training data, which challenges the claim that these models are performing a general kind of analogical reasoning when solving these problems.
## Error Analysis

A crucial aspect of letter-string analogy problems is that they do not necessarily have a "correct" answer, although, as we mentioned above, humans generally agree on what are the "best" rules describing letter-string transformations in this domain. However, there are other rules that can be inferred from a given pair of letter strings. We therefore examined the "incorrect" answers of humans and of GPT-3 and 4 to ascertain whether the kinds of errors made are similar. For both GPT-3 and GPT-4, we randomly selected five incorrect answers from each problem type and alphabet, giving a sample of approximately 160 incorrect responses per GPT model. This number can be lower if there were fewer than 5 incorrect responses for a problem type and alphabet. For humans, we selected 184 incorrect answers. By manually examining these selections, we identified four broad categories of errors:

1) **Alternate rule formation**, where the answer given is consistent with an alternative rule. For example, if we have source transformation [a b c d] [a b c e] with target [i j k l], then according to the Successor rule the answer [i j k m] is correct. However, the answer [i j k e] is consistent with the rule "Replace the last letter with 'e'".

2) **Incorrect rule use**, in which the answer given is clearly related to the target letter string, and some kind of rule has been applied, but the rule is inconsistent with the example pair. For example, for [a b c d] [a b c e] with [i j k l], the response [i j k l m] is given.

3) **Wrong**, in which the answer given is inconsistent with the expected answer, but related to the target letter string. We could not discern any clear misunderstanding or alternate rule use. For example, for [a b c d] [a b c e] with [i j k l], the response [i j k q] is given.
4) **Completely Wrong**, in which the answer given is inconsistent with the expected answer, and unrelated to the target letter string. Again, we could not discern any clear misunderstanding or alternate rule use. For example, for [a b c d] [a b c e] with [i j k l], the response [z y x b] is given.

Table 4 gives percentages for each error type for humans
and for each model. We see that in humans a large percentage (38.59%) of errors stem from using alternate rules. This is also seen in GPT-4 to a lesser extent (22%), but much less in GPT-3 (5.81%). We also see a difference in the percentage of incorrect rules applied, with GPT-3 and GPT-4 both having over 30% of errors in this category and humans having around 15%. GPT models also have a higher percentage in the Wrong category, and for each of the models this category is the largest across the errors they made. Humans have a larger percentage of errors in the Completely Wrong category than do GPT-3 and 4, however. Across these four broad categories, GPT-3 and 4 make different patterns of errors than humans.

We can further look at the kinds of alternative rules that are used by humans and by GPT. One key type of alternative rule is one where a 'literal' interpretation of a rule is applied, illustrated in Table 3. As well as literal rules, humans found alternative rules for the Fix Alphabet problem type: they would interpret the changed letter as being moved a certain number of steps in the alphabet, and would move an equivalent letter in the prompt the same way. Usually "equivalent" means position; sometimes it means the identity of the letter.
We find that GPT-4 gives the same kind of literal responses that humans do, but does not use alternative rules other than literal responses. GPT-3 has a limited number of errors in this category, and almost all are literal responses to Remove Redundant. In summary, within the "Alternative Rule" category, the GPT models found literal rules in the same way humans did, but did not find more inventive alternative rules. Breaking down the Incorrect Rule category, we see more
differences between human and GPT behavior. Human responses in this category are mostly ones where one of the rules has been applied in an incorrect situation, for example Add Letter has been applied instead of Successor. GPT-3 errors include adding two letters instead of one; continuing the alphabet; reversing the target; shifting the target; using an unpermuted alphabet instead of the one given; and repeating the target. GPT-4 made these mistakes and also generated responses that were too long. Very few humans made any of these mistakes. Out of the incorrect responses, the types of response made by humans and GPT models are very different.
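The contrast between an abstract rule and its "literal" reading (Table 3) can be made concrete for the Succ row. In this sketch (the helper names are our own), the abstract rule steps the changed target letter forward in the alphabet, while the literal rule simply copies the new source letter into the target:

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def changed_index(src, dst):
    """Position of the single letter changed by the source pair."""
    return next(i for i in range(len(src)) if src[i] != dst[i])

def abstract_answer(src, dst, target, alphabet=ALPHABET):
    """Abstract rule: apply the same alphabet step at the analogous position."""
    i = changed_index(src, dst)
    step = alphabet.index(dst[i]) - alphabet.index(src[i])
    out = list(target)
    out[i] = alphabet[alphabet.index(target[i]) + step]
    return "".join(out)

def literal_answer(src, dst, target):
    """Literal rule: copy the changed source letter into the target unchanged."""
    i = changed_index(src, dst)
    out = list(target)
    out[i] = dst[i]
    return "".join(out)

# Succ row of Table 3: [f g h i] [f g h j] ; [e f g h] -> ?
print(abstract_answer("fghi", "fghj", "efgh"))  # -> "efgi"
print(literal_answer("fghi", "fghj", "efgh"))   # -> "efgj" (the literal answer)
```

The same literal reading reproduces the [i j k e] example from the Alternate rule formation category above.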
{ "creation_datetime": "2024-03-04", "file_name": "2402.08955v1.md", "file_path": "paper_data/2402.08955v1.md", "file_size": 30918, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
dc754489-a461-4e10-be40-e3144cd1742f
# Using Counterfactual Tasks To Evaluate The Generality Of Analogical Reasoning In Large Language Models ## Discussion Our aim was to assess the performance of LLMs in "counterfactual" situations unlikely to resemble those seen in training data. We have shown that while humans are able to maintain a strong level of performance in letter-string analogy problems over unfamiliar alphabets, the performance of GPT models is not only weaker than humans on the Roman alphabet in its usual order, but that performance drops further when the alphabet is presented in an unfamiliar order or with non-letter symbols. This implies that the ability of GPT to solve this kind of analogy problem zero-shot, as claimed by Webb et al. (2023), may be more due to the presence of similar kinds of sequence examples in the training data, rather than an ability to reason by abstract analogy when solving these problems. We further see that the models GPT-3.5 and GPT-4 are no better than GPT-3 at solving these analogy problems, and in the case of GPT-3.5 are worse. GPT-3.5 and 4 have been trained to chat like a human, and this training objective may have reduced the ability to solve letter-string analogies.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08955v1.md", "file_path": "paper_data/2402.08955v1.md", "file_size": 30918, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f1713d53-37c0-4afe-92b7-4baa0f82c6c6
# Using Counterfactual Tasks To Evaluate The Generality Of Analogical Reasoning In Large Language Models ## Conclusions We have shown that, contra Webb et al. (2023), GPT models perform worse than humans, on average, in solving letter-string analogy tasks using the normal alphabet. Moreover, when such tasks are presented with counterfactual alphabets,

| Problem Type | Source | Target | Literal Answer | Explanation |
|--------------|---------------------------|---------------|----------------|----------------------------------|
| Succ | [f g h i] [f g h j] | [e f g h] | [e f g j] | Replace last letter with 'j'. |
| Fix | [b f g h i] [e f g h i] | [h i r k l] | [e i r k l] | Replace first letter with 'e'. |
| Rem | [g g h i j k] [g h i j k] | [k l m n n o] | [l m n n o] | Remove first letter of sequence. |
| Sort | [b c f e d] [b c d e f] | [v t u s w] | [v t w s u] | Swap 3rd and 5th letters. |

Table 4: % error types across GPT-3, GPT-4, and Human

| | Alt rule | Incorrect Rule | Wrong | Completely Wrong |
|-------|----------|----------------|--------|------------------|
| GPT-3 | 5.81% | 30.97% | 55.48% | 7.74% |
| GPT-4 | 22.00% | 32.67% | 42.67% | 2.67% |
| Human | 38.59% | 14.67% | 34.24% | 12.50% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08955v1.md", "file_path": "paper_data/2402.08955v1.md", "file_size": 30918, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1456c70f-88e6-42bd-a0d9-91d70828150a
# Using Counterfactual Tasks To Evaluate The Generality Of Analogical Reasoning In Large Language Models ## Conclusions

Table 4: % error types across GPT-3, GPT-4, and Human

| | Alt rule | Incorrect Rule | Wrong | Completely Wrong |
|-------|----------|----------------|--------|------------------|
| GPT-3 | 5.81% | 30.97% | 55.48% | 7.74% |
| GPT-4 | 22.00% | 32.67% | 42.67% | 2.67% |
| Human | 38.59% | 14.67% | 34.24% | 12.50% |

These models display drops in accuracy that are not seen in humans, and the kinds of mistakes that these models make are different from the kinds of mistakes that humans make. These results imply that GPT models are still lacking the kind of abstract reasoning needed for human-like fluid intelligence. In future work we hope to extend our investigation to both the letter-string generalizations and the other analogical reasoning domains studied by Webb et al. (2023). This work does not probe into how either humans or GPT form responses to these problems. Future work in this area could be to interrogate both humans and LLMs on their justifications for a particular answer. Another avenue for exploration is to investigate performance in a few-shot setting, as here, newer models may come into their own. Acknowledgments This material is based in part upon work supported by the National Science Foundation under Grant No. 2139983. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. This work has also been supported by the Templeton World Charity Foundation, Inc. (funder DOI 501100011730) under the grant https://doi.org/10.54224/20650.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08955v1.md", "file_path": "paper_data/2402.08955v1.md", "file_size": 30918, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
668a1c40-8532-4b59-8ae2-40d5aa8f1d6b
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild Ziyu Zhao1, Leilei Gan1, Guoyin Wang2, Wangchunshu Zhou3, Hongxia Yang2, Kun Kuang1, Fei Wu1 benzhao.styx@gmail.com, {leileigan, kunkuang, wufei}@zju.edu.cn, guoyinwang.duke@gmail.com, hx.yang@bytedance.com, chunshu@aiwaves.cn, 1Zhejiang University, 2ByteDance Inc., 3AIWaves Inc.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6962265d-7eaa-437c-af67-6172dbc20304
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## Abstract Low-Rank Adaptation (LoRA) provides an effective yet efficient solution for fine-tuning large language models (LLMs). The modular and plug-and-play nature of LoRA enables the integration of diverse domain-specific LoRAs to enhance the capabilities of LLMs. Previous research on exploiting multiple LoRAs either focuses on specific isolated downstream tasks or fixes the selection of LoRAs during training. However, in real-world scenarios, LLMs receive diverse prompts covering different tasks, and the pool of candidate LoRAs is often dynamically updated. To bridge this gap, we propose LoraRetriever, a retrieve-then-compose framework that adaptively retrieves and composes multiple LoRAs according to the input prompts. LoraRetriever contains three main components: firstly, identifying and retrieving LoRAs relevant to the given input; secondly, formulating strategies for effectively integrating the retrieved LoRAs; and thirdly, developing efficient batch inference to accommodate heterogeneous requests. Experimental results indicate that LoraRetriever consistently outperforms the baselines, highlighting its practical effectiveness and versatility.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1d26039e-f9e3-4df0-a57f-0e98a0beb877
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 1 Introduction Recently, large language models (LLMs) such as ChatGPT (Liu et al., 2023b) and Llama (Touvron et al., 2023) have shown notable successes in a range of fields (Hadi et al., 2023; Wang et al., 2023). Nevertheless, due to the prohibitively high computation costs for fine-tuning LLMs on specific domains, there is a growing shift towards Parameter-Efficient Fine-Tuning (PEFT) (Liu et al., 2022; Hu et al., 2023, 2021), which only updates a small fraction of the model's parameters or integrates new trainable parameters that augment the model's capabilities. Within this sphere, Low-Rank Adaptation (LoRA) (Hu et al., 2021) stands out for its remarkable effectiveness, modularity, and plug-and-play capabilities. As AI communities like Huggingface and ModelScope witness an influx of LoRA parameters tailored for diverse tasks and domains, there is an increasing emphasis on employing various LoRA experts to provide a comprehensive service (Sheng et al., 2023). Recent research has explored the integration of the mixture of experts (MoE; Jacobs et al. (1991); Jordan and Jacobs (1994)) with LoRAs (Wang et al., 2022; Liu et al., 2023a; Zadouri et al., 2023; Muqeeth et al., 2023; Anonymous, 2024). However, these methods lock in the selection of LoRAs during training, lacking the ability to dynamically update and scale in scenarios where the LoRA pool may consistently expand. LoRAhub (Huang et al., 2023) and AdapterSoup (Chronopoulou et al., 2023) explore composing LoRAs for specific downstream tasks. However, these two methods offer a one-size-fits-all solution for downstream tasks, overlooking the heterogeneous nature of diverse real-world requests. 
To bridge these gaps, our paper explores the "mixed-task scenario" as exemplified by platforms like ChatGPT (Liu et al., 2023b) and Gemini (Team et al., 2023), wherein the nature of user requests encompasses diverse prompts covering different tasks. While LLMs present a unified solution for a broad spectrum of tasks, their performance can still falter in certain specialized areas (Liu et al., 2023a; Yang et al
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3d1909af-73c4-417d-aa73-04991e4c47fb
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 1 Introduction two methods offer a one-size-fits-all solution for downstream tasks, overlooking the heterogeneous nature of diverse real-world requests. To bridge these gaps, our paper explores the "mixed-task scenario" as exemplified by platforms like ChatGPT (Liu et al., 2023b) and Gemini (Team et al., 2023), wherein the nature of user requests encompasses diverse prompts covering different tasks. While LLMs present a unified solution for a broad spectrum of tasks, their performance can still falter in certain specialized areas (Liu et al., 2023a; Yang et al., 2023). This is where the integration of LoRAs becomes crucial. As shown in Fig.1, our vision encompasses a multi-LoRA serving framework capable of dynamic enhancement, continuously improving its functionality as new LoRA modules are added and updated. Through the plug-and-play capabilities of LoRA, the framework can provide personalized services for heterogeneous downstream requests. In this paper, we introduce LoraRetriever, a retrieve-then-compose framework designed to exploit the plug-and-play nature of LoRA in mixed-task scenarios. Our framework consists of three key components: (1) Input-aware LoRA Retrieval: The first step of our framework is aligning user inputs with the corresponding LoRAs through sentence embeddings, further refined by instruction fine-tuning (Su et al., 2022; Asai et al., 2022) for effective LoRA retrieval. Through the retriever, we achieve a more flexible LoRA routing mechanism, whose training stage is disentangled from the training and inference of the LLM. (2) **LoRA Composition:** Our framework next employs two strategies for composing the LoRAs retrieved in the first step. The *Fusion of LoRAs* averages multiple LoRAs' parameters and constructs a singular comprehensive model for each input. 
The *Mixture of LoRAs* activates multiple LoRAs simultaneously and then averages the output of each submodule of the LoRAs. Composing the top-k LoRAs increases the recall rate for the correct LoRA and improves generalization to unseen tasks by integrating the LoRAs of similar tasks. (3) **Batch Inference:** Most previous work on the input-adaptive inference of LLMs does not support batch inference (Zhou et al.,
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
040384e4-8894-4fb2-935a-22381d950305
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 1 Introduction The *Fusion of LoRAs* averages multiple LoRAs' parameters and constructs a singular comprehensive model for each input. The *Mixture of LoRAs* activates multiple LoRAs simultaneously and then averages the output of each submodule of the LoRAs. Composing the top-k LoRAs increases the recall rate for the correct LoRA and improves generalization to unseen tasks by integrating the LoRAs of similar tasks. (3) **Batch Inference:** Most previous work on the input-adaptive inference of LLMs does not support batch inference (Zhou et al., 2020; Chronopoulou et al., 2023). To tackle the challenge of heterogeneous batched requests, we construct a unique LoRA mapping matrix for batch samples. This allows for tailored inferences through efficient matrix multiplication, ensuring each request activates its corresponding LoRAs while maintaining batch processing efficiency. To assess the performance of LoraRetriever, we established a mixed-task evaluation benchmark comprising 48 LoRAs spanning a variety of natural language understanding and generation tasks. The experimental results underline the effectiveness of the proposed methods in serving both in-domain and out-of-domain downstream requests. Furthermore, the retrieval routing method exhibits a robust generalization capability: although the retriever is trained on just 40% of the tasks, it effectively retrieves the corresponding LoRAs for unseen tasks.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
650ec5a3-3155-4dc4-b417-31f70ea8d120
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 2 Related Work Mixture of Experts. The Mixture of Experts (MoE) method combines various specialized submodules, guided by a gating network to tailor responses to different input types (Jacobs et al., 1991; Jordan and Jacobs, 1994; Shen et al., 2023; Riquelme et al., 2021; Dou et al., 2023). Some work (Wang et al., 2022; Zadouri et al., 2023; Zhu et al., 2023; Liu et al., 2023a; Dou et al., 2023) focuses on using the MoE method for PEFT to achieve more effective and efficient model fine-tuning. Other work (Anonymous, 2024; Muqeeth et al., 2023) focuses on using MoE to coordinate existing LoRA experts without specifically training the experts' parameters. These methods require training additional parameters for gating and are limited to a fixed number of LoRAs, making them unsuitable for complex and dynamic scenarios. Our method can be seen as gating through a retriever, hence achieving flexibility and generalization on unseen experts. Adapter Merging. In addition to model ensembling through MoE, there is an increasing focus on aggregating adapters from different domains through Adapter Merging. AdapterSoup (Chronopoulou et al., 2023) aggregates different adapters in the parameter space, allowing large language models to adapt to new domains without additional training. LoRAhub (Huang et al., 2023) employs random sampling of LoRA parameters from various domains and tasks, followed by black-box optimization to learn the weights of different LoRA parameters without involving model gradient backpropagation. These methods offer a one-size-fits-all solution for downstream tasks, which cannot be applied in the mixed-task scenario for providing personalized service.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
53e459ab-20d2-46de-a2f4-5863494e03ed
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 3 Preliminaries This section begins with a concise introduction to the Low-Rank Adaptation, followed by a detailed formalization of the mixed-task scenario.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9b0c0a2e-4768-48ed-bcdf-1cd32ac14c9a
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 3.1 Low-Rank Adaptation Directly fine-tuning large language models with all parameters is computationally intensive and is not feasible in low-resource scenarios. Based on the idea that only a small number of low-rank parameters need to be fine-tuned for sufficient performance in new domains, Hu et al. (2021) proposed Low-Rank Adaptation, where the LoRA module can be combined with the pre-trained parameters in parallel for efficient inference. Specifically, given pre-trained weights $W_0 \in \mathbb{R}^{d\times d}$ of a sub-module of the LLM, LoRA adds an extra trainable weight matrix as $W_0 + \Delta W = W_0 + BA$, where $\Delta W$ is decomposed into two smaller matrices $B \in \mathbb{R}^{d\times r}$ and $A \in \mathbb{R}^{r\times d}$, with $r$ the rank of $\Delta W$. The forward pass is modified as follows: $$x^{\prime}=W_{0}x+\Delta W x=W_{0}x+BAx,\quad(1)$$ where $x \in \mathbb{R}^d$ is the input and $x^{\prime} \in \mathbb{R}^d$ denotes the output.
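Eq. (1) can be sketched in a few lines of NumPy. The dimensions below are illustrative, and the zero initialization of `B` follows the common LoRA convention (so the adapted model starts identical to the frozen one); neither detail is prescribed by this section.

```python
import numpy as np

d, r = 8, 2  # illustrative model dimension and LoRA rank, r << d
rng = np.random.default_rng(0)

W0 = rng.normal(size=(d, d))        # frozen pre-trained weight of a sub-module
A = rng.normal(size=(r, d)) * 0.01  # trainable low-rank down-projection
B = np.zeros((d, r))                # trainable up-projection, zero-initialized

def lora_forward(x):
    # Eq. (1): x' = W0 x + ΔW x = W0 x + B A x
    return W0 @ x + B @ (A @ x)

x = rng.normal(size=d)
# With B = 0, the LoRA branch contributes nothing, so the adapted
# model initially reproduces the frozen model exactly.
assert np.allclose(lora_forward(x), W0 @ x)
```

Note that the LoRA branch costs only 2·d·r parameters per sub-module instead of d², which is what makes storing many task-specific LoRAs cheap.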
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0c25213f-ff1c-41c3-9929-8df4b3dea9e5
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 3.2 Problem Formulation In this part, we give a formal definition of the mixed-task scenario. Given an original LLM $L$, we have a set of $k$ LoRAs, $\Phi = \{\phi_1, \phi_2, \cdots, \phi_k\}$, on the shelf, where each LoRA $\phi_i$ is trained on its corresponding task $T_i$. The mixed-task inputs can be formulated as $T_{mix} = \{x \mid x \in T_1 \vee T_2 \vee \cdots \vee T_k\}$, where $\vee$ stands for the logical disjunction operator. Under the mixed-task scenario, given an input $x \in T_{mix}$ without its task tag, the serving process can be written as: $$y=F(g(\Phi,x),x,\theta),\qquad\qquad(2)$$ where $\theta$ denotes the original parameters of the LLM, $g(\Phi, x)$ represents the input-aware LoRA retrieval process and returns a set of retrieved LoRAs $\Phi_i$. $F(\Phi_i, x_i, \theta)$ depicts the LoRA composition process that integrates the retrieved LoRAs as a plugin to the original LLM.
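The retrieve-then-compose pipeline of Eq. (2) can be sketched with toy stand-ins. Everything here — the scoring functions, the scalar "delta" each LoRA adds, and the base model `theta` — is a hypothetical placeholder chosen only to make the control flow concrete, not the paper's implementation.

```python
def g(lora_pool, x, top_k=2):
    """Input-aware retrieval: score each LoRA against x, keep the top-k."""
    scored = sorted(lora_pool, key=lambda lora: lora["score"](x), reverse=True)
    return scored[:top_k]

def F(retrieved, x, theta):
    """Compose retrieved LoRAs with the frozen model theta (toy: sum deltas)."""
    return theta(x) + sum(lora["delta"](x) for lora in retrieved)

# Toy pool: each "LoRA" scores inputs by keyword overlap and adds a scalar.
pool = [
    {"name": "sentiment", "score": lambda x: x.count("good"), "delta": lambda x: 1.0},
    {"name": "translation", "score": lambda x: x.count("translate"), "delta": lambda x: 2.0},
    {"name": "nli", "score": lambda x: 0, "delta": lambda x: 4.0},
]

prompt = "translate this good sentence"
y = F(g(pool, prompt, top_k=2), prompt, theta=lambda x: 10.0)
assert y == 13.0  # base 10 + sentiment 1 + translation 2; nli is not retrieved
```

The key structural point is that `g` depends only on the input and the pool, so new LoRAs can be added to `pool` at serving time without touching `F` or `theta`.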
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
608044c0-ff52-4c82-abc5-16139f7ad0bc
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 4 Loraretriever Framework In this section, we describe the LoraRetriever framework as shown in Fig.2 for serving multi- LoRAs in mixed-task scenarios. This framework contains three major components: the input-aware LoRA retrieval module (§4.1), the LoRA composition module (§4.2), and the batch inference strategy (§4.3).
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
74d17bca-684d-4c1d-ab9e-55c01b5d2025
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 4.1 Input-Aware Lora Retrieval Our goal is to construct a LoraRetriever tailored to effectively retrieve the corresponding LoRAs for each input in scenarios where LoRAs are dynamically updated. However, existing approaches fall short of accurately identifying LoRAs under such conditions. MoE-based methods (Anonymous, 2024; Muqeeth et al., 2023) struggle to generalize when new LoRAs are introduced due to the fixed selection of LoRAs established during router training. Retrieval methods like sentence embedding (Reimers and Gurevych, 2019; Ni et al., 2021) or task embedding (Achille et al., 2019; Zhou et al., 2022) fail to map both samples and LoRAs into a shared embedding space, limiting their effectiveness in input-aware LoRA retrieval. To achieve this goal, we propose to train a retriever via instruction fine-tuning (Su et al., 2022; Asai et al., 2022), namely LoraRetriever, which can retrieve suitable LoRAs from a massive LoRA pool for a given input sample. The fundamental concept behind LoraRetriever comprises two main steps: (i) First, to embed different task-specific LoRAs into a shared embedding space to facilitate retrieval, we posit that each LoRA can be represented by a few data points, obtained by randomly choosing a dozen samples from its training dataset. We then average their instruction embeddings to represent the embedding of each LoRA. (ii) Second, to improve generalization to unseen LoRAs during retrieval, we train the retriever through instruction fine-tuning (Su et al., 2022; Wei et al., 2021) on a subset of all tasks. Training on a small subset of tasks is designed to simulate scenarios involving the integration of new LoRAs, thereby underscoring our method's generalization abilities via instruction fine-tuning. These two strategies enable the effective use of limited data distributions for input-aware retrieval and can be generalized to unseen LoRAs. 
Formally, with a sentence-embedding model E, input sequence x, and the instruction I for embedding purposes, the instructed embedding can be formulated as E(I ⊕ x), where ⊕ denotes the concatenation operation. In order to allow the embedding to capture the similarity between different tasks, the instruction is expressed as: "Represent the sentence for similar task
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6d2dfad6-f9a2-4193-967d-d2b00d042d75
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 4.1 Input-Aware Lora Retrieval underscoring our method's generalization abilities via instruction fine-tuning. These two strategies enable the effective use of limited data distributions for input-aware retrieval and can be generalized to unseen LoRAs. Formally, with a sentence-embedding model $E$, input sequence $x$, and the instruction $I$ for embedding purposes, the instructed embedding can be formulated as $E(I \oplus x)$, where $\oplus$ denotes the concatenation operation. In order to allow the embedding to capture the similarity between different tasks, the instruction is expressed as: "Represent the sentence for similar task retrieval". Each LoRA module is embedded with $m$ randomly selected domain-specific samples, expressed as $E(\phi) = \frac{1}{m}\sum_{i=1}^{m} E(I \oplus x_i^{\phi})$. This embedding method integrates both sample-wise and LoRA-module-wise embeddings, facilitating the calculation of similarity between an individual sample and a LoRA module. For measuring the similarity between a LoRA module $\phi$ and the input sequence $x$, following Ni et al. (2021), we leverage the cosine similarity between the LoraRetriever embeddings: $s(x, \phi, I) = \cos(E(I \oplus x), E(\phi))$. To improve LoRA retrieval by the retriever and broaden its generalization to unseen LoRAs, we train the embedding model $E$ through instruction fine-tuning on a small subset of tasks. To prevent the need to access new samples, we use the previously employed samples for embedding LoRAs as our training data. Consider $t$ distinct training tasks, represented as $\mathcal{T}_{train} = \{T_1, \cdots, T_t\}$. Following Ni et al. (2021), the training dataset $D$ comprises paired samples $(x_i, x_i^+)$, where each $x_i$ is a sample from a task $T_i \in \mathcal{T}_{train}$, and a positive sample $x_i^+$ is randomly selected from the same task $T_i$. To complement each positive pair, we randomly select $p$ negative pairs $(x_i, x_{ij}^-)_{j=1}^{p}$, ensuring that $x_{ij}^-$ is sourced from tasks outside of $T_i$, thereby $x_{ij}^- \notin T_i$. 
The training process is achieved through a contrastive loss (Karpukhin et al., 2020; Izacard et al., 2021; Ni et al
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
349066f4-1f06-4502-beec-380f5db0a67e
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 4.1 Input-Aware Lora Retrieval comprises paired samples $(x_i, x_i^+)$, where each $x_i$ is a sample from a task $T_i \in \mathcal{T}_{train}$, and a positive sample $x_i^+$ is randomly selected from the same task $T_i$. To complement each positive pair, we randomly select $p$ negative pairs $(x_i, x_{ij}^-)_{j=1}^{p}$, ensuring that $x_{ij}^-$ is sourced from tasks outside of $T_i$, thereby $x_{ij}^- \notin T_i$. The training process is achieved through a contrastive loss (Karpukhin et al., 2020; Izacard et al., 2021; Ni et al., 2021) defined as follows: $$\mathcal{L}=-\log\frac{e^{s(x_{i},x_{i}^{+},I)/\gamma}}{e^{s(x_{i},x_{i}^{+},I)/\gamma}+\sum_{j=1}^{p}e^{s(x_{i},x_{ij}^{-},I)/\gamma}},$$ where $\gamma$ is the softmax temperature. During the LoRA retrieval phase, the top-$k$ LoRAs are retrieved according to their similarity to the input $x$. This process can be formulated as follows: $$g(x_{i},\Phi):=\Phi_{i}=\text{TopK}\{s(\phi_{j},x_{i},I),\phi_{j}\in\Phi\}.$$
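The embedding-and-retrieval machinery above can be sketched as follows. The bag-of-letters `E` is a deliberately crude stand-in for the instruction-tuned sentence encoder (Instructor), and the two-LoRA index is a toy; only the structure — average instructed sample embeddings per LoRA, then rank by cosine similarity and keep the top-k — mirrors the text.

```python
import numpy as np

def E(text):
    """Toy bag-of-letters embedding standing in for the real encoder."""
    v = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1.0
    return v

INSTRUCTION = "Represent the sentence for similar task retrieval: "

def embed_lora(samples):
    # E(phi) = (1/m) * sum_i E(I ⊕ x_i): mean of instructed sample embeddings.
    return np.mean([E(INSTRUCTION + s) for s in samples], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def retrieve(x, lora_index, k=2):
    # g(x, Phi): rank LoRAs by cosine similarity to the instructed input.
    q = E(INSTRUCTION + x)
    scores = {name: cosine(q, e) for name, e in lora_index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

lora_index = {
    "greet": embed_lora(["hello hi hey"]),
    "math": embed_lora(["two plus two"]),
}
# A query identical to a LoRA's own representative sample retrieves it first.
assert retrieve("hello hi hey", lora_index, k=1) == ["greet"]
```

Because new LoRAs enter the index only as averaged embeddings, the retriever never needs retraining when the pool grows, which is the property the section emphasizes.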
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fd1d5466-da43-4f2c-989e-54562aa59908
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 4.2 Lora Composition After retrieving the top-$k$ LoRAs $\Phi_i$ for an input $x_i$, we proceed to integrate these LoRAs into the LLM with parameters $\theta$. This integration is achieved by applying two different LoRA composition strategies: the *Mixture of LoRAs* and the *Fusion of LoRAs*. 4.2.1 Mixture of LoRAs The mixture of LoRAs strategy aggregates the outputs of each submodule within the assembled LoRAs. Let us denote $\mathcal{A} = \{A_1, A_2, \ldots, A_n\}$ and $\mathcal{B} = \{B_1, B_2, \ldots, B_n\}$ as the sets representing submodules within $n$ LoRAs. For an input $x_i$, the output derived from the mixture of LoRAs can be expressed as $x_i^{\prime} = \frac{1}{n}\sum_{j=1}^{n} B_j A_j x_i$, where $x_i^{\prime}$ denotes the output. This process signifies the integration of each LoRA module's output, effectively blending their contributions to form a unified output. 4.2.2 Fusion of LoRAs In contrast to the Mixture method, which combines the outputs of different LoRAs, fusing the parameters of these LoRAs presents an alternative composition strategy. Let the parameters of each LoRA $\phi_i$ be denoted by $\Theta_i$. The parameters of the fused LoRA are then represented as $\Theta_{fusion} = \frac{1}{k}\sum_{j=1}^{k}\Theta_j$. This formulation allows the fused parameters to function akin to a single LoRA.
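The two strategies differ only in where the averaging happens — over outputs (Mixture) or over parameters (Fusion) — which a small NumPy sketch makes concrete. Dimensions and the random matrices are arbitrary; the equal weights 1/n and 1/k follow the text.

```python
import numpy as np

d, r, n = 6, 2, 3  # illustrative dimension, rank, and number of retrieved LoRAs
rng = np.random.default_rng(1)
A = [rng.normal(size=(r, d)) for _ in range(n)]
B = [rng.normal(size=(d, r)) for _ in range(n)]

def mixture(x):
    # Mixture of LoRAs: average the *outputs* of each LoRA submodule.
    return sum(B[j] @ (A[j] @ x) for j in range(n)) / n

def fusion(x):
    # Fusion of LoRAs: average the *parameters* first, then apply once.
    # Averaging B and A separately is not the same as averaging the
    # products B_j A_j, so the two strategies generally differ.
    A_bar = sum(A) / n
    B_bar = sum(B) / n
    return B_bar @ (A_bar @ x)

x = rng.normal(size=d)
assert mixture(x).shape == fusion(x).shape == (d,)
assert not np.allclose(mixture(x), fusion(x))
```

Fusion keeps inference at the cost of a single LoRA, while Mixture pays for n branch evaluations but preserves each LoRA's individual contribution.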
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
86548dac-ccd0-493c-9d7e-c3693b1aba52
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 4.3 Batch Inference Of Multiple Loras Implementing batch inference in the presence of multiple LoRAs and diverse composition diagrams poses a significant technical challenge. To address this, we introduce a unique approach for batch inference. Our method processes a batch of samples denoted as $X \in \mathbb{R}^{b\times l\times d}$, where $b$, $l$, and $d$ denote the batch size, sequence length, and sample dimensionality, respectively. For each input $x_i$ and its retrieved LoRAs $\Phi_i$ within the same batch, we aggregate these LoRAs into a collective set denoted by $\Phi_B$. To ensure the uniqueness of $\Phi_B$, we eliminate duplicates, mindful of the possibility that retrieved LoRAs may overlap across different samples. The resulting set $\Phi_B$ comprises $p$ unique LoRAs, where $p \leq bk$. For every sample $x_i$, a $p$-dimensional mapping vector $M_i$ is generated, which specifies the indices of its corresponding LoRAs within $\Phi_B$. The LoRA mapping vectors are combined into a matrix $\mathbf{M} \in \mathbb{R}^{b\times p}$. The parameters of a submodule in LoRA can be denoted as $A$ and $B$, and are concatenated within the batched LoRAs $\Phi_B$ to obtain $\mathbf{A} \in \mathbb{R}^{p\times r\times d}$ and $\mathbf{B} \in \mathbb{R}^{p\times d\times r}$. The batch inference process of the mixture of LoRAs can be formulated as follows: $$X^{\prime}={\bf M}\circ({\bf B}\circ{\bf A}\circ X),\qquad\qquad(3)$$ where we denote the batched output of a layer of multiple LoRAs as $X^{\prime} \in \mathbb{R}^{b\times l\times d}$ and extend the symbol $\circ$ to denote potential broadcasting, following Wen and Chaudhuri (2023). The batch inference process of LoRA fusion can be formulated as $$X^{\prime}=(\mathbf{M}\circ\mathbf{B})(\mathbf{M}\circ\mathbf{A})\circ X.\qquad\quad(4)$$ These strategies can be simply implemented with the einsum operation, and PyTorch-style pseudocode is shown in Appendix D.
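The batched mixture pass of Eq. (3) can be written as three einsums. The paper's Appendix D gives PyTorch pseudocode; the NumPy subscripts below are our own sketch, and filling M with soft per-sample weights is purely illustrative (rows corresponding to non-retrieved LoRAs would be zero in practice).

```python
import numpy as np

b, l, d, r, p = 4, 5, 8, 2, 3  # batch, seq len, dim, rank, # unique LoRAs
rng = np.random.default_rng(2)

X = rng.normal(size=(b, l, d))
A = rng.normal(size=(p, r, d))  # stacked down-projections of the batched LoRAs
B = rng.normal(size=(p, d, r))  # stacked up-projections
# M[i, j]: weight of LoRA j for sample i (normalized per sample here).
M = rng.random(size=(b, p))
M /= M.sum(axis=1, keepdims=True)

# Eq. (3): X' = M ∘ (B ∘ A ∘ X) — apply every LoRA, then mix per sample.
H = np.einsum("prd,bld->bplr", A, X)     # A ∘ X
Y = np.einsum("pdr,bplr->bpld", B, H)    # B ∘ (A ∘ X)
X_mix = np.einsum("bp,bpld->bld", M, Y)  # M ∘ (...)

# Reference: explicit loop over samples and LoRAs.
ref = np.zeros_like(X)
for i in range(b):
    for j in range(p):
        ref[i] += M[i, j] * (X[i] @ A[j].T @ B[j].T)
assert np.allclose(X_mix, ref)
```

The einsum form runs every sample against every LoRA in one fused computation, so heterogeneous requests stay in a single batch instead of being split per LoRA.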
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
20be33f3-9c87-47af-b77c-c456c3e218f6
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 5 Experiments This section outlines the evaluation framework for assessing different approaches in mixed-task scenarios. Furthermore, a comprehensive analysis of the proposed LoraRetriever framework is presented.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e542c84b-d86e-4ae6-9628-d342b23f3d9f
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 5.1 Evaluation Framework Base Model & LoRA Configuration. To test various methods in the mixed-task scenarios, we leverage Llama-2-{7b,13b} (Touvron et al., 2023) as the base models and train a range of LoRAs for a spectrum of tasks. We select a portion of the Flan-v2 datasets (Wei et al., 2021) to train 48 LoRAs for a spectrum of tasks covering Natural Language Understanding (NLU) and Natural Language Generation (NLG). Following the categorization by Wei et al. (2021), these tasks can be grouped into 10 distinct task clusters. We train each LoRA following the Alpaca (Taori et al., 2023) format; the rank $r$ and the scaling hyperparameter $\alpha$ are set to 6 and 12, respectively. The details of LoRA training can be found in Appendix E. Mixed Task Evaluation Dataset. For constructing the mixed-task dataset, we randomly chose 50 samples from the test set for each task used in training the 48 LoRAs, subsequently mixing and shuffling these samples to form a unified dataset with 6000 data entries. Further details about these datasets are available in Appendix E. Baseline Methods. We compared our method with the following baselines: (1) Mixture of Experts (Zhu et al., 2023; Zadouri et al., 2023; Liu et al., 2023a; Wang et al., 2022; Anonymous, 2024). (2) **SMEAR** (Muqeeth et al., 2023); (3) AdapterSoup (Chronopoulou et al., 2023); (4) LoRAhub (Huang et al., 2023). Specifically, we implement three variants of MoE. A detailed description of the baseline models can be found in Appendix A, with their implementations presented in Appendix F. Implementation of LoraRetriever. To train the LoraRetriever, we continue to perform instruction fine-tuning based on Instructor-xl (Su et al., 2022). The training data consisted of only 40% of the tasks used to train task-specific LoRAs, with each task represented by 20 samples randomly selected from its respective LoRA training set. 
In this process, we categorized samples
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
874c483f-7675-4cf9-a282-ffcd76bb55be
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild

## 5.1 Evaluation Framework

Specifically, we implement three variants of MoE. A detailed description of the baseline models can be found in Appendix.A, with their implementations presented in Appendix.F. Implementation of LoraRetriever. To train the LoraRetriever, we continue to perform instruction fine-tuning based on Instructor-xl (Su et al., 2022). The training data consisted of only 40% of the tasks used to train task-specific LoRAs, with each task represented by 20 samples randomly selected from its respective LoRA training set. In this process, we categorized samples from the same LoRA as positive examples, while those from different LoRAs were considered negative examples.

Table 1 (excerpt, w/ Llama2-7b):

| Task | Perfect | Sel. (IID) | Sel. (OOD) | Fus. (IID) | Fus. (OOD) | Mix. (IID) | Mix. (OOD) | MoE Top1 | MoE Top3 | MoE Soft | SMEAR | AdapterSoup | LoRAhub |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Struct to Text (Rouge-1) | 64.0 | 61.3 | 50.1 | 49.4 | 45.9 | 55.9 | 50.4 | 45.6 | 46.8 | 47.9 | 48.0 | 4.5 | 35.6 |
| Struct to Text (Rouge-2) | 39.6 | 37.0 | 26.6 | 25.7 | 23.5 | 30.0 | 26.4 | 21.9 | 22.9 | 23.8 | 24.2 | 1.1 | 17.7 |
| Struct to Text (Rouge-L) | 57.0 | 54.5 | 43.9 | 43.6 | 40.3 | 49.5 | 44.0 | 39.8 | 40.7 | 41.7 | 42.4 | 4.5 | 31.6 |
| Translation (BLEU) | 13.1 | 12.8 | 12.0 | 12.2 | 12.3 | 12.8 | 12.2 | 9.5 | 10.5 | 10.7 | 11.0 | 1.4 | 8.5 |
| Commonsense | 62.5 | 55.5 | 46.0 | 51.0 | 48.0 | 61.5 | 50.0 | 54.5 | 52.0 | 51.5 | 50.0 | 46.0 | 17.5 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9420a4c9-9cc5-428b-ad7b-10a4e533259a
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild

## 5.1 Evaluation Framework

Table 1 (excerpt, w/ Llama2-7b):

| Task | Perfect | Sel. (IID) | Sel. (OOD) | Fus. (IID) | Fus. (OOD) | Mix. (IID) | Mix. (OOD) | MoE Top1 | MoE Top3 | MoE Soft | SMEAR | AdapterSoup | LoRAhub |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Translation (BLEU) | 13.1 | 12.8 | 12.0 | 12.2 | 12.3 | 12.8 | 12.2 | 9.5 | 10.5 | 10.7 | 11.0 | 1.4 | 8.5 |
| Commonsense | 62.5 | 55.5 | 46.0 | 51.0 | 48.0 | 61.5 | 50.0 | 54.5 | 52.0 | 51.5 | 50.0 | 46.0 | 17.5 |
| Sentiment | 90.0 | 89.5 | 89.0 | 79.0 | 78.5 | 89.5 | 90.5 | 70.0 | 75.0 | 74.5 | 74.0 | 73.5 | 0.5 |
| Reading Comp. | 67.3 | 51.7 | 40.3 | 47.3 | 45.0 | 51.3 | 47.3 | 48.7 | 47.7 | 48.7 | 45.7 | 40.7 | 2.7 |
| Close-Book QA | 45.0 | 40.0 | 43.0 | 41.0 | 37.5 | 45.0 | 48.5 | 40.5 | 38.5 | 40.0 | 32.0 | 31.5 | 1.0 |
| Coreference | 52.0 | 50.0 | 46.0 | 47.0 | 53.0 | 63.0 | 49.0 | 61.0 | 59.0 | 57.0 | 58.0 | 43.0 | 1.0 |
| Read. Comp. w/ Com. | 69.0 | 69.0 | 30.0 | 35.0 | 19.0 | 46.0 | 40.0 | 31.0 | 29.0 | 29.0 | 23.0 | 14.0 | 3.0 |
| Paraphrase | 65.5 | 58.0 | 45.5 | 45.5 | 44.0 | 56.5 | 45.5 | 42.0 | 38.5 | 36.0 | 34.5 | 46.5 | 1.0 |
| NLI | 72.3 | 70.0 | 60.6 | 51.4 | 53.8 | 67.9 | 64.3 | 50.3 | 49.6 | 48.3 | 50.8 | 62.4 | 10.5 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fe2c810c-78a0-4c45-a707-d7a3e2b69499
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild

## 5.1 Evaluation Framework

Table 1 (excerpt, w/ Llama2-7b, continued):

| Task | Perfect | Sel. (IID) | Sel. (OOD) | Fus. (IID) | Fus. (OOD) | Mix. (IID) | Mix. (OOD) | MoE Top1 | MoE Top3 | MoE Soft | SMEAR | AdapterSoup | LoRAhub |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Paraphrase | 65.5 | 58.0 | 45.5 | 45.5 | 44.0 | 56.5 | 45.5 | 42.0 | 38.5 | 36.0 | 34.5 | 46.5 | 1.0 |
| NLI | 72.3 | 70.0 | 60.6 | 51.4 | 53.8 | 67.9 | 64.3 | 50.3 | 49.6 | 48.3 | 50.8 | 62.4 | 10.5 |

w/ Llama2-13b:

| Task | Perfect | Sel. (IID) | Sel. (OOD) | Fus. (IID) | Fus. (OOD) | Mix. (IID) | Mix. (OOD) | MoE Top1 | MoE Top3 | MoE Soft | SMEAR | AdapterSoup | LoRAhub |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Struct to Text (Rouge-1) | 65.4 | 62.6 | 49.4 | 52.7 | 49.7 | 57.7 | 52.1 | 46.8 | 47.0 | 48.5 | 48.3 | 7.1 | 39.3 |
| Struct to Text (Rouge-2) | 40.8 | 38.2 | 25.8 | 29.2 | 26.8 | 32.6 | 28.1 | 24.5 | 25.1 | 25.7 | 25.2 | 2.5 | 20.7 |
| Struct to Text (Rouge-L) | 58.7 | 56.0 | 42.9 | 45.9 | 43.2 | 50.8 | 45.4 | 41.1 | 41.9 | 42.7 | 42.2 | 6.4 | 34.6 |
| Translation (BLEU) | 12.9 | 12.9 | 12.7 | 14.6 | 14.1 | 14.6 | 14.1 | 11.8 | 12.4 | 11.9 | 12.4 | 0.8 | 10.2 |
| Commonsense | 69.5 | 59.0 | 47.5 | 61.0 | 56.0 | 64.0 | 60.5 | 65.0 | 66.0 | 64.0 | 61.0 | 17.5 | 34.0 |
| Sentiment | 90.0 | 90.5 | 91.0 | 87.0 | 83.5 | 91.5 | 91.5 | 90.0 | 89.5 | 90.0 | 89.0 | 79.5 | 11.0 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d547c9ae-c469-45ef-bddc-c983ab812643
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild

## 5.1 Evaluation Framework

Table 1 (excerpt, w/ Llama2-13b):

| Task | Perfect | Sel. (IID) | Sel. (OOD) | Fus. (IID) | Fus. (OOD) | Mix. (IID) | Mix. (OOD) | MoE Top1 | MoE Top3 | MoE Soft | SMEAR | AdapterSoup | LoRAhub |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Commonsense | 69.5 | 59.0 | 47.5 | 61.0 | 56.0 | 64.0 | 60.5 | 65.0 | 66.0 | 64.0 | 61.0 | 17.5 | 34.0 |
| Sentiment | 90.0 | 90.5 | 91.0 | 87.0 | 83.5 | 91.5 | 91.5 | 90.0 | 89.5 | 90.0 | 89.0 | 79.5 | 11.0 |
| Reading Comp. | 76.0 | 60.3 | 48.0 | 56.7 | 49.3 | 60.3 | 51.3 | 53.7 | 53.3 | 52.3 | 51.3 | 48.7 | 3.3 |
| Close-Book QA | 64.0 | 60.0 | 53.0 | 62.0 | 58.0 | 63.0 | 61.0 | 59.5 | 57.5 | 58.5 | 57.5 | 34.5 | 6.5 |
| Coreference | 74.0 | 75.0 | 65.0 | 55.0 | 59.0 | 76.0 | 64.0 | 61.0 | 62.0 | 56.0 | 57.0 | 55.0 | 10.0 |
| Read. Comp. w/ Com. | 82.0 | 80.0 | 33.0 | 57.0 | 49.0 | 78.0 | 58.0 | 51.0 | 48.0 | 49.0 | 49.0 | 13.0 | 14.0 |
| Paraphrase | 77.5 | 68.0 | 52.5 | 55.5 | 45.5 | 71.0 | 55.5 | 50.0 | 52.5 | 47.5 | 52.0 | 64.0 | 2.5 |
| NLI | 82.4 | 78.9 | 70.2 | 69.8 | 66.4 | 78.1 | 75.7 | 67.7 | 71.0 | 67.4 | 66.6 | 67.5 | 14.9 |

Table 1: We report the average performance of each task cluster. The full results of each task are shown in Appendix.C.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
97043c42-cfda-4816-8d55-e95ca3894960
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 5.1 Evaluation Framework .5 55.5 45.5 71.0 55.5 50.0 52.5 47.5 52.0 64.0 2.5 NLI 82.4 78.9 70.2 69.8 66.4 78.1 75.7 67.7 71.0 67.4 66.6 67.5 14.9 Table 1: We report the average performance of each task cluster. The full results of each task are shown in Appendix.C. "IID" signifies that LoraRetriever can access any LoRA for every test sample, encompassing the LoRA specific to the sample's task. "OOD" indicates that for each test sample, we mask the LoRA associated with its specific task during the retrieval phase. Consequently, no sample can access its ideal LoRA, allowing us to assess the LoraRetriever's cross-task generalization capability. The performance of perfectly selected corresponding LoRA for each sample is colored in gray. We have bolded the best performance of each task and underlined the best performance in the "OOD" setting. | Method | Top 1 | Top 3 | Top 5 | Top 8 | |---------------------------|---------|---------|---------|---------| | all-mpnet-base-v2 | 58.40 | 78.26 | 84.77 | 90.24 | | all-MiniLM-L6-v2 | 51.73 | 73.11 | 80.54 | 87.18 | | msmarco-distilbert-cos-v5 | 45.84 | 66.01 | 75.14 | 82.67 | | gtr-t5-xl | 53.19 | 69.72 | 77.41 | 83.59 | | LoraRetriever |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7304cc36-849f-4260-9ef6-8219bb3f8bee
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild

## 5.1 Evaluation Framework

Table 2 (excerpt):

| Method | Top 1 | Top 3 | Top 5 | Top 8 |
|---|---|---|---|---|
| msmarco-distilbert-cos-v5 | 45.84 | 66.01 | 75.14 | 82.67 |
| gtr-t5-xl | 53.19 | 69.72 | 77.41 | 83.59 |
| LoraRetriever (0%) | 60.80 | 79.29 | 85.57 | 91.58 |
| LoraRetriever (40%) | 63.16 | 89.09 | 95.45 | 98.97 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
37b3c709-29f9-4df3-84fd-af3c2de2e6d9
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild

## 5.1 Evaluation Framework

Table 2 (excerpt):

| Method | Top 1 | Top 3 | Top 5 | Top 8 |
|---|---|---|---|---|
| LoraRetriever (40%) | 63.16 | 89.09 | 95.45 | 98.97 |
| LoraRetriever (100%) | 74.08 | 97.37 | 99.15 | 99.82 |

LoRAs were considered negative examples. Additionally, the three distinct strategies for LoRA composition are: (1) **Selection**, which applies the highest-ranked (top-1) retrieved LoRA on its own, and can be viewed as a degenerate variant of the Mixture and Fusion methods; (2) **Mixture**, which averages the outputs of each submodule of the top-k retrieved LoRAs; and (3) **Fusion**, which averages the parameters of the top-k retrieved LoRAs. Throughout our experiments, k = 3 is the default setting. Metrics. Following Wei et al. (2021), we assess the performance on the "Struct to Text" task using
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6db137c6-f6ad-4d6f-84be-b5640dd91e6f
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild

## 5.1 Evaluation Framework

Following Wei et al. (2021), we assess the performance on the "Struct to Text" tasks using Rouge-{1, 2, L} and on the "Translation" tasks using BLEU. Additionally, for the NLU tasks, we evaluate the exact-match accuracy of each method.

Table 3 (excerpt):

| Methods | 0% | 40% (Δ%) | 100% (Δ%) |
|---|---|---|---|
| Selection | 57.99 | 62.42 (+7.64%) | |
| Fusion | 51.50 | 51.50 (+0.00%) | |
| Mixture | 62.24 | 63.54 (+2.09%) | |
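The exact-match accuracy used for the NLU tasks can be sketched as follows (a minimal illustration assuming simple case/whitespace normalization; not the authors' evaluation script):

```python
# Exact-match accuracy: a prediction counts only if it matches the reference
# string exactly after lowercasing and whitespace normalization (an assumption).
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly match their reference."""
    assert len(predictions) == len(references)
    normalize = lambda s: " ".join(s.lower().split())
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return hits / len(references)

print(exact_match_accuracy(["Positive", " negative "], ["positive", "negative"]))  # 1.0
```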
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7ef38541-df6e-4de8-8a86-8f43a589fc07
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild

## 5.2 Main Results

The main results of the mixed-task evaluation are shown in Tab.1. We present the mean performance across each task cluster and additionally evaluate LoraRetriever's effectiveness in an out-of-domain (OOD) setting. In the OOD configuration, we mask the corresponding LoRA for each sample, thereby preventing LoraRetriever from retrieving the ideal LoRA for that sample; in this way, we can assess the cross-task generalization capability of LoraRetriever. From the results, we make the following observations: (1) The proposed framework, LoraRetriever, which performs input-aware LoRA retrieval and composition, markedly surpasses the other baselines, which focus on specific downstream tasks. (2) Among the composition strategies, Mixture and Selection perform similarly in IID scenarios, while Fusion is weaker than both. The reasons are as follows: (i) in the IID setting, LoraRetriever achieves strong top-1 selection, so Selection and Mixture yield similar results; (ii) since different tasks are inherently heterogeneous, directly averaging the top-k LoRA parameters, as Fusion does, is inferior. (3) In the OOD setting, Mixture exceeds Selection, and Fusion performs similarly to Selection. The reasons are as follows: (i) Selection cannot retrieve the associated LoRA for the input sample in the OOD setting, leading to a significant performance drop; (ii) Mixture can leverage the capabilities of similar tasks to address OOD tasks, alleviating that drop. (4) The MoE and SMEAR methods are weaker than LoraRetriever. Their limitation stems from a restricted capacity to adapt and generalize to dynamically changing environments populated with diverse LoRAs, diminishing their efficacy in mixed-task scenarios.
(5) In mixed-task scenarios, although AdapterSoup uniformly searches for appropriate LoRAs for downstream tasks, the retrieved LoRAs fall short in personalization for each request, hindering their effectiveness on each specific task. (6) LoRAhub proves entirely ineffective in the mixed-task scenario. First, its fusion depends on randomly selected LoRAs, which may not be relevant. Second, the presence of heterogeneous tasks introduces conflicting parameter optimization directions, resulting in the total breakdown
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
bb51f959-078d-4f2f-836a-f2beb544977c
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild

## 5.2 Main Results

ization to dynamically changing environments populated with diverse LoRAs, thereby diminishing their efficacy in mixed-task scenarios. (5) In mixed-task scenarios, although AdapterSoup uniformly searches for appropriate LoRAs for downstream tasks, the retrieved LoRAs fall short in personalization for each request, hindering their effectiveness on each specific task. (6) LoRAhub proves entirely ineffective in the mixed-task scenario. First, its fusion depends on randomly selected LoRAs, which may not be relevant. Second, the presence of heterogeneous tasks introduces conflicting parameter optimization directions, resulting in the total breakdown of parameter fusion.
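The gap between the three composition strategies can be sketched on a toy example (illustrative only, not the paper's code; dimensions and parameters are made up):

```python
import numpy as np

# For a single linear layer, LoRA i contributes delta_i(x) = B_i @ A_i @ x.
rng = np.random.default_rng(0)
d, r, k = 4, 2, 3                       # hidden dim, LoRA rank, top-k retrieved
A = rng.normal(size=(k, r, d))          # down-projections of the k retrieved LoRAs
B = rng.normal(size=(k, d, r))          # up-projections
x = rng.normal(size=d)                  # one input activation

sel = B[0] @ (A[0] @ x)                                   # Selection: top-1 LoRA only
mix = np.mean([B[i] @ (A[i] @ x) for i in range(k)], 0)   # Mixture: average outputs
fus = B.mean(0) @ (A.mean(0) @ x)                         # Fusion: average parameters

# For heterogeneous LoRAs, averaging parameters is not the same as averaging
# outputs, consistent with Fusion trailing Mixture in Table 1.
print(np.allclose(mix, fus))  # False for these random parameters
```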
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6faf14c5-9a31-4ea2-b418-db3ea786156c
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild

## 5.3 Analysis

Performance of Retriever. We compare LoraRetriever with some popular off-the-shelf sentence embedding models on Huggingface, adopting the model nomenclature of Wolf et al. (2020). To analyze the effect of the percentage of tasks used to train LoraRetriever, we trained three variants with different percentages. Tab.2 shows the performance of different retrieval models for retrieving relevant LoRAs. Guiding sentence embedding models with specific prompts leads to a retrieval improvement over common retrieval models. After instruction fine-tuning, the retriever's ability to retrieve the corresponding LoRA for an input improves significantly: fine-tuning on 40% of the tasks yields a 2.36% increase in top-1 accuracy and a 9.80% increase in top-3 accuracy, and training across all tasks achieves the largest improvement. To demonstrate the generalizability of the proposed framework when dealing with unseen LoRAs, we used a retriever trained on 40% of the tasks in the main experiment to simulate the dynamic updates to the LoRA pool that might occur while serving with LoraRetriever. Tab.3 shows the performance of LoraRetriever trained with different proportions of tasks for LoRA retrieval. For the Selection and Mixture methods, training on 40% of the tasks already brings significant improvements; the best performance is achieved when training on all tasks, but the gain over 40% is relatively small, which reflects the good generalization ability of instruction fine-tuning for LoraRetriever. Fig.3 illustrates the similarity between task embeddings for different tasks through a heatmap, where tasks from the same task cluster are grouped in square brackets.
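The retrieval step described above can be sketched as follows (an assumption-level illustration, not the released implementation: each LoRA is represented by the mean embedding of a handful of its training samples, and inputs are routed by cosine similarity; the LoRA names below are hypothetical):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def retrieve_topk(input_emb, lora_embeddings, k=3):
    """Rank LoRAs by cosine similarity between the input embedding and each
    LoRA's embedding (mean of a few of its training-sample embeddings)."""
    ranked = sorted(lora_embeddings.items(),
                    key=lambda item: cosine(input_emb, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

lora_embeddings = {
    "sst2":  np.array([1.0, 0.1, 0.0]),   # sentiment-ish direction (toy)
    "squad": np.array([0.0, 1.0, 0.1]),   # QA-ish direction (toy)
    "wmt16": np.array([0.0, 0.1, 1.0]),   # translation-ish direction (toy)
}
print(retrieve_topk(np.array([0.9, 0.2, 0.0]), lora_embeddings, k=2))  # ['sst2', 'squad']
```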
It is shown that task embeddings within the same domain are more similar, indicating that the LoraRetriever embeddings can serve as task embeddings to characterize the similarities between different tasks and be applied for LoRA retrieval. Impact of the number of Retrieved LoRA. Fig.4 (a) illustrates the performance of the number of retrieved LoRAs on the mean accuracy of the NLU tasks. The results indicate that as the number of retrieved LoRAs increases, the performance of the Mixture initially improves slightly but then stabilizes. In contrast, the Fusion shows
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e9a5dad4-85c3-4e3d-b485-b0a38ff1fa0c
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 5.3 Analysis brackets. It is shown that task embeddings within the same domain are more similar, indicating that the LoraRetriever embeddings can serve as task embeddings to characterize the similarities between different tasks and be applied for LoRA retrieval. Impact of the number of Retrieved LoRA. Fig.4 (a) illustrates the performance of the number of retrieved LoRAs on the mean accuracy of the NLU tasks. The results indicate that as the number of retrieved LoRAs increases, the performance of the Mixture initially improves slightly but then stabilizes. In contrast, the Fusion shows a continuous decline in performance with an increasing number of LoRAs, which once again demonstrates that under the conditions of heterogeneous tasks, the simple averaging of parameters can compromise the original capabilities of the LoRAs. In particular, in the OOD setting, the performance of the Mixture improves significantly as the number of LoRAs increases, illustrating that in the absence of an ideal LoRA choice for a request, leveraging the capabilities of multiple LoRAs of similar tasks can effectively achieve cross-task generalization. Effectiveness of Batch Inference Strategy. To evaluate the efficiency of our proposed batch inference strategy, we compared the throughput of different batch sizes. The throughput is defined as the number of both input and output tokens per second across all requests in the mixed-task benchmark. We specifically compared the computational efficiency with that of a single LoRA. Our evaluation encompassed the entire evaluation dataset, and we limited the generation to the first produced token to mitigate discrepancies caused by varying generation lengths across different methods. These experiments were conducted on an NVIDIA A100 GPU (80GB) utilizing bfloat16 precision. 
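The throughput metric defined above can be sketched as (assumed bookkeeping, not the authors' benchmarking harness):

```python
# Throughput = total input + output tokens across all requests, divided by
# wall-clock seconds.
def throughput(requests, elapsed_seconds):
    """requests: iterable of (input_tokens, output_tokens) pairs, one per request."""
    total_tokens = sum(inp + out for inp, out in requests)
    return total_tokens / elapsed_seconds

# e.g. two requests, generation limited to the first output token as in the text:
print(throughput([(100, 1), (80, 1)], 2.0))  # 91.0
```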
As illustrated in Fig.4 (b), our batch inference strategy markedly improves the throughput of the framework, with a slight throughput reduction compared to a single LoRA. Notably, the Fusion outperforms the mixture strategy in throughput efficiency, attributed to its parameter averaging approach that circumvents the need for parallel computation across multiple LoRAs. Showcases. We showcase the framework's ability to adeptly integrate multiple LoRAs for synergistic problem-solving, as evidenced in Fig.5. We manually craft three problems in Fig.5, which cannot retrieve any single LoRA to solve these problems directly, necessitating the cooperation of existing LoRAs. Specifically, the first example requires LoraRetriever to integrate
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5006647a-d9a1-42c5-9281-d13fb09cd08b
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## 5.3 Analysis slight throughput reduction compared to a single LoRA. Notably, the Fusion outperforms the mixture strategy in throughput efficiency, attributed to its parameter averaging approach that circumvents the need for parallel computation across multiple LoRAs. Showcases. We showcase the framework's ability to adeptly integrate multiple LoRAs for synergistic problem-solving, as evidenced in Fig.5. We manually craft three problems in Fig.5, which cannot retrieve any single LoRA to solve these problems directly, necessitating the cooperation of existing LoRAs. Specifically, the first example requires LoraRetriever to integrate NLI and translation tasks' capabilities. The retrieved LoRA wmt16-tren is utilized for comprehending Turkish, while glueqqp is applied to NLI tasks. In the second scenario, LoRAs are integrated for translating from German to French. Although there is no direct LoRA for German-to-French translation, the combined use of wmt16-deen for German-to-English and wmt14-enfr for English-to-French enables an effective German-to-French translation. The third scenario illustrates the fusion of distinct capabilities by combining Romanian translation with text generation: leveraging the wmt16-roen LoRA for Romanian comprehension and the common-gen LoRA for generating text, LoraRetriever successfully merges these diverse functionalities. This demonstration emphasizes the framework's substantial ability to blend distinct LoRA capabilities, anticipating further exploration of capability fusion of LoRAs as a future direction.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ac4327c5-c997-441d-9dc0-7390d1c57a08
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild

## 6 Conclusion

This paper investigates a new problem: serving multiple LoRAs from a dynamically updated LoRA pool for heterogeneous downstream requests. To this end, we introduce a framework named LoraRetriever to identify and retrieve the appropriate LoRAs for a specific input. We then focus on the composition of these retrieved LoRAs to ensure a tailored and practical application in real-world situations. We also propose an efficient batch inference strategy to accommodate batched requests. Experiments demonstrate the effectiveness of the proposed LoraRetriever.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
145ec4a3-e740-4298-b734-68c5071cd440
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild

## Limitation

While promising, LoraRetriever still has some drawbacks. (1) User data privacy. When users upload a LoRA, we need a small amount of training data (10–20 samples) to represent the distribution of that LoRA. In privacy-sensitive scenarios, representing a LoRA with data may not be feasible; aligning LoRA parameters and sample distributions in the embedding space in a manner that respects data privacy is a worthwhile direction for future exploration. (2) The proposed LoraRetriever framework is only suitable for multi-LoRA collaboration under the same model architecture. In practice, users may choose different model architectures and PEFT methods, and designing a corresponding collaboration mechanism for such scenarios is worth further research.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
38184db9-a93d-46d3-bcb8-d13bdb3b838e
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild

## A Details Of Baseline Methods

(1) **Mixture of Experts** (Zhu et al., 2023; Zadouri et al., 2023; Liu et al., 2023a; Wang et al., 2022; Anonymous, 2024). Many works have considered coordinating different adapters through MoE; here we explore three distinct variants: one employing a soft mixture of experts and two utilizing discrete routing (top-1 and top-3). Details on training the MoE methods are provided in Appendix.F. (2) **SMEAR** (Muqeeth et al., 2023) introduces adaptive routing by performing a weighted average of different adapters' parameters to utilize various experts effectively. For the MoE and SMEAR baselines, scaling is challenging because training is confined to a limited set of LoRAs; we therefore selected a dedicated LoRA expert for each domain to specialize in router training. (3) **AdapterSoup** (Chronopoulou et al., 2023) uniformly selects the corresponding LoRAs for the entire downstream task, and thus cannot provide personalized service for diverse requests. (4) **LoRAhub** (Huang et al., 2023) uses black-box optimization to learn the weights of various LoRA parameters, thereby facilitating weighted parameter averaging for specific downstream tasks. In our implementation, we follow the default setting, which entails randomly selecting 20 LoRAs from the available LoRA pool and performing weighted parameter averaging. For the MoE, SMEAR, and LoRAhub approaches, we selected 20 data samples from the training datasets of all tasks to serve as their training data.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2a0ffa38-2a02-4b73-8538-b38f05917ffb
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## B Main Difference Between Loraretriever And Baseline Methods C Full Results In Tab.4, we show the full results of the mixed-task scenario of all tasks.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
492c20fb-ba3c-46c9-8c57-fac77a00f1e8
# Loraretriever: Input-Aware Lora Retrieval And Composition For Mixed Tasks In The Wild ## D Pytorch-Style Pseudocode For Batch Inference The batch inference process can be easily achieved through a few lines of einsum operation. We show the PyTorch style pseudocode in Alg.1.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5a6dd4fc-facf-4305-82f5-f870826d13ba
```python
# X: (b,l,d), M: (b,p)
# A: (p,r,d), B: (p,d,r)

# LoRA fusion computation
FA = torch.einsum('bp,prd->brd', M, A)
FB = torch.einsum('bp,pdr->bdr', M, B)
mid = torch.einsum('bld,brd->blr', X, FA)
res = torch.einsum('blr,bdr->bld', mid, FB)

# LoRA mixture computation
mid = torch.einsum('bld,prd->blpr', X, A)
mid = torch.einsum('blpr,pdr->blpd', mid, B)
res = torch.einsum('blpd,bp->bld', mid, M)
```
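The same contractions can be re-checked in NumPy (an assumption-level sketch mirroring the PyTorch pseudocode above; the shapes and a per-sample reference loop are ours):

```python
import numpy as np

# X:(b,l,d) inputs, M:(b,p) per-sample LoRA weights, A:(p,r,d), B:(p,d,r).
rng = np.random.default_rng(1)
b, l, d, p, r = 2, 3, 4, 5, 2
X = rng.normal(size=(b, l, d)); M = rng.normal(size=(b, p))
A = rng.normal(size=(p, r, d)); B = rng.normal(size=(p, d, r))

# Fusion: weight-average the parameters per sample, then apply once.
FA = np.einsum('bp,prd->brd', M, A)
FB = np.einsum('bp,pdr->bdr', M, B)
fusion = np.einsum('blr,bdr->bld', np.einsum('bld,brd->blr', X, FA), FB)

# Mixture: apply every LoRA, then weight-average the outputs per sample.
mid = np.einsum('blpr,pdr->blpd', np.einsum('bld,prd->blpr', X, A), B)
mixture = np.einsum('blpd,bp->bld', mid, M)

# Reference: explicit loop over the batch confirms the fusion einsums.
ref = np.stack([x @ np.einsum('p,prd->rd', m, A).T @ np.einsum('p,pdr->dr', m, B).T
                for x, m in zip(X, M)])
print(np.allclose(fusion, ref))  # True
```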
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fae2c3e5-3999-42aa-9c62-4641e71a4f85
## E Details Of Training And Evaluation Datasets

We leverage a subset of the Flan-v2 datasets (Wei et al., 2021), as shown in Fig.7, for LoRA expert training and mixed-task dataset generation. We summarize the details of the used datasets as follows:

- Struct-to-Text Conversion: evaluates the capability to generate natural language descriptions from structured data inputs. We use: (1) CommonGen; (2) DART; (3) E2ENLG; (4) WebNLG.
- Translation: converting text from one language to another while maintaining the original meaning and nuances. We use: (1) En-Fr from WMT'14; (2) En-De, En-Tr, En-Ru, En-Fi, En-Ro from WMT'16; (3) En-Es from Paracrawl.
- Commonsense Reasoning: assesses the ability to apply physical or scientific principles alongside common sense in reasoning tasks. We use: (1) COPA, (2) HellaSwag, (3) PiQA, and (4) StoryCloze.
- Sentiment Analysis: a fundamental NLP task that determines the sentiment polarity (positive or negative) of a given text. We use: (1) IMDB, (2) Sentiment140, (3) SST-2, and (4) Yelp.
- Closed-Book Question Answering: challenges models to answer general-knowledge questions without direct access to external knowledge.

Table 4 (excerpt, w/ Llama-2-7b):

| Task | Perfect | Sel. (IID) | Sel. (OOD) | Fus. (IID) | Fus. (OOD) | Mix. (IID) | Mix. (OOD) | MoE Top1 | MoE Top3 | MoE Soft | SMEAR | AdapterSoup | LoRAhub |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| WebNLG (Rouge-1) | 71.2 | 67.0 | 53.9 | 49.4 | 45.4 | 57.8 | 53.9 | 45.1 | 47.6 | 49.1 | 51.1 | 3.9 | 32.5 |
| WebNLG (Rouge-2) | 50.6 | 44.5 | 30.0 | 25.9 | 24.1 | 33.5 | 29.4 | 22.6 | 25.8 | 26.1 | 27.9 | 0.9 | 17.3 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
248740b0-df4e-4a29-a1b2-48d6e02e7ec1
## E Details Of Training And Evaluation Datasets

Table 4 (excerpt, w/ Llama-2-7b):

| Task | Perfect | Sel. (IID) | Sel. (OOD) | Fus. (IID) | Fus. (OOD) | Mix. (IID) | Mix. (OOD) | MoE Top1 | MoE Top3 | MoE Soft | SMEAR | AdapterSoup | LoRAhub |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| WebNLG (Rouge-2) | 50.6 | 44.5 | 30.0 | 25.9 | 24.1 | 33.5 | 29.4 | 22.6 | 25.8 | 26.1 | 27.9 | 0.9 | 17.3 |
| WebNLG (Rouge-L) | 64.4 | 60.9 | 49.1 | 45.5 | 41.0 | 52.3 | 49.6 | 40.0 | 41.9 | 43.3 | 45.4 | 3.9 | 31.1 |
| DART (Rouge-1) | 71.7 | 67.9 | 58.4 | 56.3 | 53.4 | 63.2 | 60.0 | 55.4 | 56.3 | 56.9 | 60.0 | 3.3 | 40.0 |
| DART (Rouge-2) | 49.1 | 45.8 | 34.9 | 32.3 | 30.6 | 36.6 | 35.4 | 30.3 | 31.0 | 30.8 | 33.0 | 1.3 | 20.1 |
| DART (Rouge-L) | 64.6 | 61.1 | 52.4 | 50.3 | 47.9 | 56.3 | 52.4 | 49.7 | 50.8 | 50.2 | 54.8 | 3.3 | 35.2 |
| E2ENLG (Rouge-1) | 66.1 | 65.8 | 59.3 | 62.2 | 57.2 | 66.0 | 58.7 | 52.9 | 54.0 | 55.3 | 53.2 | 4.2 | 50.1 |
| E2ENLG (Rouge-2) | 40.0 | 39.4 | 34.1 | 34.7 | 32.0 | 38.8 | 32.1 | 26.9 | 27.6 | 28.8 | 27.5 | 2.4 | 26.3 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9e64f32d-7d61-4afe-ba79-85cf028509e2
## E Details Of Training And Evaluation Datasets

Table 4 (excerpt, w/ Llama-2-7b, continued):

| Task | Perfect | Sel. (IID) | Sel. (OOD) | Fus. (IID) | Fus. (OOD) | Mix. (IID) | Mix. (OOD) | MoE Top1 | MoE Top3 | MoE Soft | SMEAR | AdapterSoup | LoRAhub |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| E2ENLG (Rouge-1) | 66.1 | 65.8 | 59.3 | 62.2 | 57.2 | 66.0 | 58.7 | 52.9 | 54.0 | 55.3 | 53.2 | 4.2 | 50.1 |
| E2ENLG (Rouge-2) | 40.0 | 39.4 | 34.1 | 34.7 | 32.0 | 38.8 | 32.1 | 26.9 | 27.6 | 28.8 | 27.5 | 2.4 | 26.3 |
| E2ENLG (Rouge-L) | 56.7 | 55.7 | 50.2 | 52.7 | 49.1 | 56.9 | 49.0 | 45.1 | 45.0 | 47.0 | 45.1 | 4.2 | 42.2 |
| CommonGen (Rouge-1) | 46.9 | 44.7 | 29.0 | 29.9 | 27.7 | 36.5 | 29.0 | 29.0 | 29.3 | 30.1 | 27.6 | 6.6 | 19.8 |
| CommonGen (Rouge-2) | 18.8 | 18.3 | 7.3 | 9.9 | 7.2 | 11.1 | 8.6 | 7.7 | 7.1 | 9.3 | 8.4 | 0.0 | 6.9 |
| CommonGen (Rouge-L) | 42.5 | 40.5 | 24.0 | 25.8 | 23.3 | 32.7 | 24.8 | 24.4 | 25.1 | 26.3 | 24.3 | 6.6 | 18.0 |
| Paracrawl-enes | 24.3 | 24.2 | 20.3 | 22.9 | 22.3 | 22.8 | 22.1 | 18.0 | 18.8 | 19.5 | 21.6 | 4.5 | 16.4 |
| WMT'16-tren | 3.2 | 3.1 | 2.6 | 3.5 | 3.3 | 3.7 | 2.6 | 3.5 | 3.2 | 3.4 | 3.2 | 0.0 | 2.0 |
| WMT'16-ruen | 10.8 | 10.4 | 9.8 | 9.2 | 9.3 | 11.0 | 10.8 | 6.2 | 7.8 | 8.3 | 7.3 | 0.0 | 4.8 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
05a6ad97-caac-43f2-b182-98b5655e5eb5
## E Details Of Training And Evaluation Datasets

Table 4 (excerpt, w/ Llama-2-7b, continued):

| Task | Perfect | Sel. (IID) | Sel. (OOD) | Fus. (IID) | Fus. (OOD) | Mix. (IID) | Mix. (OOD) | MoE Top1 | MoE Top3 | MoE Soft | SMEAR | AdapterSoup | LoRAhub |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| WMT'16-tren | 3.2 | 3.1 | 2.6 | 3.5 | 3.3 | 3.7 | 2.6 | 3.5 | 3.2 | 3.4 | 3.2 | 0.0 | 2.0 |
| WMT'16-ruen | 10.8 | 10.4 | 9.8 | 9.2 | 9.3 | 11.0 | 10.8 | 6.2 | 7.8 | 8.3 | 7.3 | 0.0 | 4.8 |
| WMT'16-deen | 18.9 | 18.7 | 20.3 | 17.9 | 18.8 | 18.8 | 18.7 | 11.6 | 14.0 | 14.7 | 16.6 | 1.1 | 11.4 |
| WMT'16-fien | 6.5 | 6.5 | 7.0 | 7.2 | 7.1 | 7.3 | 7.8 | 6.2 | 6.2 | 6.1 | 6.5 | 0.7 | 4.3 |
| WMT'16-roen | 13.9 | 14.0 | 12.3 | 12.8 | 13.3 | 13.1 | 12.2 | 9.8 | 10.7 | 10.1 | 10.3 | 0.3 | 8.0 |
| WMT'14-enfr | 16.5 | 16.1 | 16.9 | 17.7 | 18.0 | 17.8 | 18.0 | 15.9 | 17.3 | 17.1 | 16.4 | 3.5 | 15.2 |
| WMT'16-csen | 10.7 | 9.4 | 7.0 | 6.1 | 6.2 | 8.3 | 5.8 | 4.7 | 6.3 | 6.3 | 6.3 | 0.8 | 6.1 |
| StoryCloze | 72.0 | 62.0 | 42.0 | 72.0 | 68.0 | 84.0 | 58.0 | 74.0 | 70.0 | 70.0 | 68.0 | 62.0 | 48.0 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
699f364b-777e-41e4-a48a-2284a334dd77
# X: (b,l,d), M: (b,p) # A: (p,r,d), B: (p,d,r) # LoRA fusion computation FA=torch.einsum('bp,prd->brd',M,A) FB=torch.einsum('bp,pdr->bdr',M,B) mid=torch.einsum('bld,brd->blr',X,FA) ## E Details Of Training And Evaluation Datasets 5 15.2 WMT'16-csen 10.7 9.4 7.0 6.1 6.2 8.3 5.8 4.7 6.3 6.3 6.3 0.8 6.1 COMMONSENSE StoryCloze 72.0 62.0 42.0 72.0 68.0 84.0 58.0 74.0 70.0 70.0 68.0 62.0 48.0 PIQA 46.0 46.0 32.0 34.0 36.0 38.0 34.0 40.0 38.0 38.0 36.0 38.0 0.0 COPA 86.0 74.0 68.0 78.0 70.0 80.0 68.0 72.0 70.0 72.0 70.0 56.0 22.0 HellaSwag 46.0 40.0 42.0 20.0 18.0 44.0 40.0 32.0 30.0 26.0 26.0 28.0 0.0 sentiment SST-2 98.0 98.0 96.0 74.0 78.0 96.0 94.0 56.0 68.0 66.0 66.0 74.0 0.0 Yelp 98.0 94.0 94.0 96.0 96.0 98.0 98.0 86.0 90.0 86.0 84.0 80.0 0.0 IMDB 96.0 96.0 96.0 92.0 82.0 96.0 96.0 76.0 80.0 80.0 84.0 80.0 0.0 sentiment140 68.0 70.0 70.0 54.0 58.0 68.0 74.0 62.0 62.0 66.0 62
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7a0f92c1-e89d-44d2-ad6d-0bf048fc5494
# X: (b,l,d), M: (b,p) # A: (p,r,d), B: (p,d,r) # LoRA fusion computation FA=torch.einsum('bp,prd->brd',M,A) FB=torch.einsum('bp,pdr->bdr',M,B) mid=torch.einsum('bld,brd->blr',X,FA) ## E Details Of Training And Evaluation Datasets 98.0 86.0 90.0 86.0 84.0 80.0 0.0 IMDB 96.0 96.0 96.0 92.0 82.0 96.0 96.0 76.0 80.0 80.0 84.0 80.0 0.0 sentiment140 68.0 70.0 70.0 54.0 58.0 68.0 74.0 62.0 62.0 66.0 62.0 60.0 2.0 READING Comp. MultiRC 68.0 52.0 38.0 44.0 44.0 48.0 44.0 54.0 52.0 50.0 48.0 40.0 6.0 SQuADv2 62.0 56.0 12.0 30.0 20.0 22.0 16.0 24.0 24.0 26.0 22.0 16.0 0.0 SQuADv1 68.0 66.0 68.0 64.0 64.0 62.0 68.0 68.0 70.0 66.0 66.0 54.0 4.0 OBQA 82.0 68.0 58.0 64.0 60.0 78.0 66.0 62.0 64.0 66.0 60.0 40.0 0.0 BoolQ 84.0 60.0 60.0 68.0 70.0 80.0 76.0 74.0 68.0 76.0 70.0 72.0 6.0 drop 40.0 8.0 6.0 14.0 12.0 18.0 14.0 10.0 8.0 8.0 8.0 22.0 0.0 CLOSE-BOOK QA NQ 18.0 16.0 10.0 16.0 14.0 16.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
96464b1f-03d6-4e06-86c6-9cfb1f7cc891
# X: (b,l,d), M: (b,p) # A: (p,r,d), B: (p,d,r) # LoRA fusion computation FA=torch.einsum('bp,prd->brd',M,A) FB=torch.einsum('bp,pdr->bdr',M,B) mid=torch.einsum('bld,brd->blr',X,FA) ## E Details Of Training And Evaluation Datasets .0 68.0 70.0 80.0 76.0 74.0 68.0 76.0 70.0 72.0 6.0 drop 40.0 8.0 6.0 14.0 12.0 18.0 14.0 10.0 8.0 8.0 8.0 22.0 0.0 CLOSE-BOOK QA NQ 18.0 16.0 10.0 16.0 14.0 16.0 10.0 12.0 12.0 12.0 4.0 12.0 0.0 ARC-e 50.0 56.0 70.0 54.0 56.0 66.0 82.0 58.0 58.0 60.0 58.0 48.0 0.0 ARC-c 46.0 42.0 46.0 34.0 34.0 50.0 46.0 46.0 42.0 42.0 42.0 24.0 0.0 TriviaQa 66.0 46.0 46.0 60.0 46.0 48.0 56.0 46.0 42.0 46.0 24.0 42.0 4.0 COREFERENCE DPR 54.0 50.0 50.0 56.0 60.0 68.0 56.0 64.0 60.0 62.0 62.0 46.0 2.0 WSC 50.0 50.0 42.0 38.0 46.0 58.0 42.0 58.0 58.0 52.0 54.0 40.0 0.0 READ. COOMP. W/ COMMONSENSE CosmosQa 68.0 68.0 34.0 46.0 32.0 50.0 46.0 44.0 46.0 44.0 38.0 14.0 6.0 record
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9e210cc5-bda7-483f-9695-3d0c2e495e76
# X: (b,l,d), M: (b,p) # A: (p,r,d), B: (p,d,r) # LoRA fusion computation FA=torch.einsum('bp,prd->brd',M,A) FB=torch.einsum('bp,pdr->bdr',M,B) mid=torch.einsum('bld,brd->blr',X,FA) ## E Details Of Training And Evaluation Datasets 0 WSC 50.0 50.0 42.0 38.0 46.0 58.0 42.0 58.0 58.0 52.0 54.0 40.0 0.0 READ. COOMP. W/ COMMONSENSE CosmosQa 68.0 68.0 34.0 46.0 32.0 50.0 46.0 44.0 46.0 44.0 38.0 14.0 6.0 record 70.0 70.0 26.0 24.0 6.0 42.0 34.0 18.0 12.0 14.0 8.0 14.0 0.0 PARAPHRASE Paws Wiki 90.0 64.0 40.0 44.0 42.0 56.0 46.0 56.0 50.0 48.0 54.0 60.0 2.0 QQP 74.0 74.0 68.0 66.0 60.0 80.0 58.0 50.0 40.0 36.0 28.0 54.0 0.0 MRPC 60.0 58.0 58.0 60.0 62.0 60.0 58.0 42.0 44.0 40.0 42.0 60.0 2.0 STSB 38.0 36.0 16.0 12.0 12.0 30.0 20.0 20.0 20.0 20.0 14.0 12.0 0.0 NLI CB 88.9 80.0 62.2 77.8 57.8 86.7 66.7 68.9 64.4 68.9 62.2 55.6 13.3 WNLI 70.0 68.0 46.0 44.0 50.0 60.0 54.0 56.0 56.0 42.0 44.0 52
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3d7f7054-46ae-439d-9eee-ab4b4843892a
# X: (b,l,d), M: (b,p) # A: (p,r,d), B: (p,d,r) # LoRA fusion computation FA=torch.einsum('bp,prd->brd',M,A) FB=torch.einsum('bp,pdr->bdr',M,B) mid=torch.einsum('bld,brd->blr',X,FA) ## E Details Of Training And Evaluation Datasets .0 20.0 20.0 14.0 12.0 0.0 NLI CB 88.9 80.0 62.2 77.8 57.8 86.7 66.7 68.9 64.4 68.9 62.2 55.6 13.3 WNLI 70.0 68.0 46.0 44.0 50.0 60.0 54.0 56.0 56.0 42.0 44.0 52.0 0.0 ANLI-r1 50.0 50.0 50.0 40.0 42.0 40.0 42.0 40.0 40.0 36.0 38.0 38.0 24.0 ANLI-r2 46.0 46.0 46.0 32.0 36.0 46.0 46.0 40.0 36.0 38.0 32.0 46.0 20.0 ANLI-r3 46.0 42.0 38.0 38.0 40.0 44.0 50.0 28.0 32.0 34.0 38.0 40.0 24.0 MNLI-m 88.0 84.0 88.0 62.0 66.0 80.0 88.0 48.0 54.0 50.0 56.0 76.0 0.0 MNLI-mm 92.0 90.0 94.0 64.0 82.0 88.0 90.0 48.0 48.0 50.0 60.0 84.0 2.0 SNLI 96.0 84.0 84.0 56.0 58.0 90.0 92.0 54.0 52.0 54.0 54.0 82.0 0.0 QNLI 94.0 94.0 26.0 46.0 48.0 74.0 38.0 56.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b50dec43-a8d5-4e3e-ab6f-f7e74671799a
# X: (b,l,d), M: (b,p) # A: (p,r,d), B: (p,d,r) # LoRA fusion computation FA=torch.einsum('bp,prd->brd',M,A) FB=torch.einsum('bp,pdr->bdr',M,B) mid=torch.einsum('bld,brd->blr',X,FA) ## E Details Of Training And Evaluation Datasets .0 82.0 88.0 90.0 48.0 48.0 50.0 60.0 84.0 2.0 SNLI 96.0 84.0 84.0 56.0 58.0 90.0 92.0 54.0 52.0 54.0 54.0 82.0 0.0 QNLI 94.0 94.0 26.0 46.0 48.0 74.0 38.0 56.0 56.0 54.0 60.0 70.0 0.0 RTE 52.0 62.0 72.0 54.0 58.0 70.0 76.0 64.0 58.0 56.0 64.0 80.0 22.0 Task / Llama-2-13b Perfect Selection Selection Fusion Mixture MoE Top1 MoE Top3 MoE Soft SME- AR Adapter Soup LoRA Hub IID OOD IID OOD IID OOD Struct to Text WebNLG Rouge-1 72.6 68.5 51.9 55.2 51.3 59.7 53.7 47.0 47.8 48.3 49.7 6.6 34.2 WebNLG Rouge-2 51.4 47.5 28.6 30.1 27.4 35.6 29.1 25.5 26.4 27.0 26.3 3.2 16.8 WebNLG Rouge-l 66.0 62.4 48.3 49.4 46.2 55.1 49.0 42.5 44.3 43.5 44.3 6.3 32.4 DART Rouge-1 74.0 67.0 57.0 60.4 58.7 62.6 60.6 57.9 57.0 58.7 58.9 12.5
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
49c496f1-57a2-48a1-b1dc-477a10800419
# X: (b,l,d), M: (b,p) # A: (p,r,d), B: (p,d,r) # LoRA fusion computation FA=torch.einsum('bp,prd->brd',M,A) FB=torch.einsum('bp,pdr->bdr',M,B) mid=torch.einsum('bld,brd->blr',X,FA) ## E Details Of Training And Evaluation Datasets .0 26.3 3.2 16.8 WebNLG Rouge-l 66.0 62.4 48.3 49.4 46.2 55.1 49.0 42.5 44.3 43.5 44.3 6.3 32.4 DART Rouge-1 74.0 67.0 57.0 60.4 58.7 62.6 60.6 57.9 57.0 58.7 58.9 12.5 43.3 DART Rouge-2 54.6 45.9 33.6 37.4 35.2 38.9 37.3 32.6 33.2 34.1 34.3 5.9 25.5 DART Rouge-l 67.7 61.2 50.0 54.0 52.2 55.0 53.4 50.9 50.9 51.8 51.5 11.8 38.1 E2ENLG Rouge-1 66.4 66.1 59.2 65.6 61.7 66.7 63.9 52.8 53.9 55.5 55.1 2.5 58.6 E2ENLG Rouge-2 39.6 39.3 32.8 36.6 33.5 39.0 36.4 27.8 28.4 28.3 28.8 0.9 31.6 E2ENLG Rouge-l 56.7 56.4 48.9 53.4 49.7 56.2 53.7 43.0 44.5 45.1 45.4 2.2 48.2 CommonGen Rouge-1 48.5 48.9 29.3 29.5 27.0 41.7 30.0 29.5 29.3 31.5 29.5 6.9 21.2 CommonGen Rouge-2 17.7 20.3 8.2 12.6 11.3
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c3f5d924-f8e2-4592-bf84-4e461468ed77
# X: (b,l,d), M: (b,p) # A: (p,r,d), B: (p,d,r) # LoRA fusion computation FA=torch.einsum('bp,prd->brd',M,A) FB=torch.einsum('bp,pdr->bdr',M,B) mid=torch.einsum('bld,brd->blr',X,FA) ## E Details Of Training And Evaluation Datasets 48.9 53.4 49.7 56.2 53.7 43.0 44.5 45.1 45.4 2.2 48.2 CommonGen Rouge-1 48.5 48.9 29.3 29.5 27.0 41.7 30.0 29.5 29.3 31.5 29.5 6.9 21.2 CommonGen Rouge-2 17.7 20.3 8.2 12.6 11.3 17.0 9.5 12.2 12.3 13.5 11.5 0.0 8.8 CommonGen Rouge-l 44.3 44.1 24.4 26.7 24.6 36.7 25.3 28.0 27.9 30.3 27.8 5.4 19.7 Translation Paracrawl-enes 24.4 25.4 23.1 28.3 27.2 27.4 25.8 21.8 21.5 20.0 24.2 4.0 19.0 WMT'16-tren 2.9 2.4 1.2 3.7 3.2 3.4 2.7 3.0 3.3 2.7 3.3 0.0 1.9 WMT'16-ruen 11.8 11.5 10.3 11.8 11.5 10.5 12.0 8.8 9.9 9.3 8.8 0.0 8.1 WMT'16-deen 19.9 19.9 20.7 20.2 20.7 20.2 20.2 16.5 18.6 18.1 18.3 2.2 15.1 WMT'16-fien 7.3 6.8 5.1 9.8 7.3 8.7 7.8 8.0 8.3 8.4 8.3 0.0
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4abd2576-964d-4d88-8c02-3a5ec77c14e8
# X: (b,l,d), M: (b,p) # A: (p,r,d), B: (p,d,r) # LoRA fusion computation FA=torch.einsum('bp,prd->brd',M,A) FB=torch.einsum('bp,pdr->bdr',M,B) mid=torch.einsum('bld,brd->blr',X,FA) ## E Details Of Training And Evaluation Datasets 3 8.8 0.0 8.1 WMT'16-deen 19.9 19.9 20.7 20.2 20.7 20.2 20.2 16.5 18.6 18.1 18.3 2.2 15.1 WMT'16-fien 7.3 6.8 5.1 9.8 7.3 8.7 7.8 8.0 8.3 8.4 8.3 0.0 6.3 WMT'16-roen 14.0 13.9 10.9 11.6 11.1 17.4 13.3 7.9 9.1 9.3 9.3 0.0 7.4 WMT'14-enfr 16.6 17.1 18.4 20.7 21.1 18.7 20.9 17.4 17.5 18.7 17.2 0.4 15.1 WMT'16-csen 6.6 6.2 11.6 11.1 10.5 10.3 10.1 10.6 10.7 9.0 9.4 0.0 8.8 COMMONSENSE StoryCloze 96.0 80.0 56.0 90.0 76.0 80.0 76.0 96.0 98.0 94.0 92.0 18.0 64.0 PIQA 48.0 52.0 30.0 46.0 46.0 46.0 46.0 42.0 40.0 36.0 38.0 14.0 10.0 COPA 76.0 74.0 68.0 74.0 74.0 78.0 76.0 72.0 80.0 80.0 76.0 22.0 60.0 HellaSwag 58.0 30.0 36.0 34.0 28.0 52
{ "creation_datetime": "2024-03-04", "file_name": "2402.09997v1.md", "file_path": "paper_data/2402.09997v1.md", "file_size": 60798, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
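The einsum snippet above stops after computing `mid` and never uses `FB`. The sketch below completes the batched multi-LoRA computation under stated assumptions: the final `out` step and all concrete shapes are illustrative additions, not taken from the source, and NumPy's `einsum` is used as a drop-in stand-in for `torch.einsum` (the subscript strings are identical).

```python
import numpy as np

# Assumed illustrative shapes: batch b, sequence length l, hidden dim d,
# LoRA-pool size p, LoRA rank r.
b, l, d, p, r = 2, 4, 8, 3, 4

X = np.random.randn(b, l, d)   # input activations
M = np.random.rand(b, p)       # per-example weights over the p LoRA modules
A = np.random.randn(p, r, d)   # stacked LoRA down-projections
B = np.random.randn(p, d, r)   # stacked LoRA up-projections

# Fuse the p LoRA modules into one adapter per batch element.
FA = np.einsum('bp,prd->brd', M, A)   # fused down-projection, (b, r, d)
FB = np.einsum('bp,pdr->bdr', M, B)   # fused up-projection,   (b, d, r)

# Apply the fused adapter: project into the rank-r bottleneck, then back to d.
mid = np.einsum('bld,brd->blr', X, FA)    # (b, l, r)
out = np.einsum('blr,bdr->bld', mid, FB)  # (b, l, d)

print(out.shape)
```

Per batch element this is just `out[i] = X[i] @ FA[i].T @ FB[i].T`; the einsum form avoids materializing one weight matrix per example when fusing a whole pool of adapters in a single batched call.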