de904f5c-af5a-46a5-a5f9-36346f76f485
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 2. Background And Related Work plug-ins for code writing and execution (Paranjape et al., 2023; Wei et al., 2023). Yang et al. (2023) use LLMs to generate solutions to optimization problems directly, without calling traditional solvers, relying on prompt optimization to improve performance. The approach is limited to small problems, since the performance of LLMs degrades as the input context grows, even for explicitly long-context models (Liu et al., 2023). Chatbots for Optimization. In a recent paper, Chen et al. (2023) developed a chatbot to help users detect and fix infeasible optimization problems expressed in Pyomo code; it serves as an AI assistant rather than as a solver. Li et al. (2023) designed a chatbot to answer natural-language queries about an optimization model. Alibaba Cloud (2022) also developed a chatbot to facilitate optimization modeling, but no public paper or documentation on it is available. Benchmark-driven Optimization Modeling. More closely related to our approach, Ramamonjison et al. (2023) introduced a dataset of 1101 natural-language representations of LP problems. They proposed a two-stage mapping from the natural-language representation to the problem formulation using an intermediate representation. Ramamonjison et al. (2022) designed a system to simplify and improve the modeling experience for operations research, but did not offer an end-to-end solution. (Anonymous, 2024) presented a multi-agent cooperative framework to automatically model and program complex operations research (OR) problems, and evaluated it on NL4Opt and another, more complex dataset they curated. In terms of traditional MILP benchmarking, MIPLIB is widely recognized as a benchmark for evaluating the performance of MILP solvers. MIPLIB offers a diverse collection of MILP instance realizations that are, for the most part, detached from their original formulations. This paper focuses primarily on the modeling aspects of MILPs and therefore does not correlate directly with MIPLIB.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e82d7ee5-f6ac-47cc-a5eb-e2a42cf819fd
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 3. Methodology This section details the design of OptiMUS. See Figure 1 for an illustration. The problem presented in Figure 2 serves as a running example. OptiMUS starts with a natural language description of the optimization problem. The problem is first preprocessed to extract the parameters, constraints, objective function, and background information. Then OptiMUS uses a multi-agent framework to process and solve the structured problem. Appendix B includes all prompts used in OptiMUS. For brevity, we use the word clause to refer to a constraint or objective.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0f101e80-45a6-456c-a4ca-efbb44c1f1de
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 3.1. Structured Problem The OptiMUS preprocessor converts a natural language description of the problem into a *structured problem* (Figure 2) with the following components: - **Parameters**: A list of parameters of the optimization problem. Each parameter has three components: 1) symbol, 2) shape, and 3) text definition. OptiMUS can choose symbols, infer the shape, and define the parameters if they are not explicitly included in the problem statement. Importantly, numerical data that may be included in the problem statement is omitted from the parameters and stored for later use. This ensures that the parameters are short and easy to include in future prompts. - **Clauses**: A list of the *clauses* (objective and constraints) of the optimization problem. The preprocessor initializes each clause with its natural language description. Later these clauses will be augmented with LaTeX formulations and code as well. - **Background**: A short string explaining the real-world context of the problem. This string is included in every prompt to improve common sense reasoning. The preprocessing uses three prompts: the first prompt extracts the parameters, the second segments the problem into objective and constraints, and the third eliminates redundant (e.g., two restatements of the constraint that produc- [Figure 2 content: parameters (M, scalar, number of different machine types; MachineTimeCap [M], capacity of machine time; MaterialReq [R, P], amount of raw material required per unit of product), objective, constraints, and background. Original statement: "A factory produces several products. Each product requires different amounts of raw materials, machine time, and labor. Each product generates a specific amount of revenue. The factory needs to determine how much of each product to produce to maximize profits while not exceeding resource capacities." Extracted objective: "The factory aims to maximize its profits." Extracted background: "A factory produces different products, each requiring various amounts of raw materials, machine time, and labor. These products generate specific revenues upon sale."]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c3833446-2353-4028-ac07-39dc8e7e9649
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## Algorithm 1 Workflow Of OptiMUS 1: **Input:** Natural language description of problem P 2: P(0) ← PREPROCESS(P) 3: Initialize msg ← "" 4: Initialize conversation ← [] 5: **for** t = 1, . . . **do** 6: AGENT, task ← MANAGER(conversation) 7: P(t), msg ← AGENT(P(t−1), task) 8: conversation += msg 9: **if** msg = Done **then** break
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2c3ede62-60fb-4e7e-b0c6-aeaafb6f4e30
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 10: End tion quantity is nonnegative), unnecessary (such as facts about the problem parameters, e.g., that price is nonnegative), and incorrect constraints (e.g., production quantity must exactly equal demand). The second step can also be challenging: for example, in the factory example shown in Figure 2, the production amount for each product should be a positive value, but this is not stated explicitly.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
13bb991d-4200-4af1-84de-25ff75c70374
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 3.2. Agents After preprocessing, OptiMUS defines problem variables and formulates and codes each clause. To ensure consistency of the formulations, OptiMUS constructs and maintains a connection graph that records which variables and parameters appear in each constraint. This connection graph is key to the performance and scalability of OptiMUS, as it allows the LLM to focus only on the relevant context for each prompt, generating more stable results. The list of variables and the LaTeX formulations and code are initially empty; when all clauses are formulated, programmed, and validated, the process is complete. [Figure residue: example clauses from the running example — production quantities of products are nonnegative; production quantities of products are integral; total raw materials used for all products cannot exceed MaterialCapacity; total labor used for all products cannot exceed LaborCapacity.] Manager. Inspired by Wu et al. (2023), OptiMUS uses a manager agent to coordinate the work of formulation, programming, and evaluation, recognizing that these steps may need to be repeated to ensure consistency and correctness (see Algorithm 1). At each step, the manager looks at the conversation so far and chooses the next agent (formulator, programmer, or evaluator) to process the problem. The manager also generates and assigns a *task* to the chosen agent, for example: Review and fix the formulation of the objective.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
44f3318a-08bc-4ae2-8bca-6d344766efac
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## Formulator. The Formulator Agent Must: 1. Write and correct mathematical formulations for variables and clauses. 2. Define new variables and auxiliary constraints. 3. Update the links in the connection graph. If the assigned task is to formulate new clauses, the formulator iterates over the clauses that have not yet been formulated and generates new formulations for them. During this process, it will also define auxiliary constraints and new variables when necessary. Moreover, it decides which parameters and variables are related to the clause (see Figure 3). This information is used to update the connection graph. On the other hand, if the task is to fix incorrect formulations reported by the evaluator or the programmer, the agent iterates through the clauses marked as incorrect, fixes their formulations, and updates the connection graph. [Figure residue: the connection graph links the objective "Maximize the total revenue from producing various products" and the constraint "For each material, the amount used should not exceed the available capacity" to the parameters MaterialReq [R, P], MaterialCap [R], and LaborCap [L], and to the problem variables.] OptiMUS also has an extra modeling layer that captures special model structures (e.g., special-ordered
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d016082e-e371-4c06-8384-d80676979dbf
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## Formulator. The Formulator Agent Must: fixes their formulations, and updates the connection graph. OptiMUS also has an extra modeling layer that captures special model structures (e.g., special-ordered-set and indicator variables); we leave a more detailed discussion to Appendix A. Programmer. The responsibility of the programmer agent is to write and debug the solver code. When the programmer is called by the manager, it first reads the task. If the task is to program new clauses, the agent iterates over the clauses that have not yet been coded and generates code from their formulations. If the task is to fix incorrect formulations reported by the evaluator, the agent iterates through the clauses marked as bogus and fixes their code. In our experiments, the programmer uses Python as the programming language and Gurobi as the solver. OptiMUS can target other solvers and programming languages as long as they are supported by the LLM. Evaluator. The evaluator agent's responsibility is to execute the generated code on the data and to identify any errors that occur during execution. If the evaluator faces a runtime error, it flags the variable or clause with the bogus code and responds to the manager with an appropriate explanation of the error. This information will later be used by the other agents to fix the formulation and debug the code.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
66278201-9c89-4bb1-8d1a-fc36fa83cde4
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 3.3. The Connection Graph Recall from Section 3.2 that OptiMUS maintains a connection graph over constraints, objectives, parameters, and variables. [Figure residue: the graph links the objective "Maximize the total revenue from producing various products" and the constraint "For each material, the amount used should not exceed the available capacity" to the parameters MaterialReq [R, P], MaterialCap [R], and LaborCap [L], and to the Production variables.] OptiMUS uses this graph to retrieve the relevant context for each prompt so prompts remain short. The graph is also used to generate and debug code and to correct wrong formulations. Figure 4 provides an example.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c5e7d36f-5f23-4de6-84d7-0ffa959b5128
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 4. Experiments In this section, we conduct a comprehensive evaluation of OptiMUS. We begin by detailing the datasets used in our experiments and showcase the superior performance of OptiMUS across these datasets, highlighting its strengths. An ablation study demonstrates the impact of different system components on our results, and a sensitivity analysis probes the internal dynamics of OptiMUS. We conclude this section by identifying failure cases and potential areas for further improvement.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
199b586b-6cc3-49cc-a214-711a916aa3aa
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 4.1. Dataset NL4OPT. This dataset is a collection of 1101 easy linear programming problems proposed as part of the NL4OPT competition (Ramamonjison et al., 2023). The dataset contains a natural language description of each problem, along with an annotated *intermediate representation* that lists parameters, variables, and clauses. ComplexOR. ComplexOR is a collection of 37 complex operations research problems in a variety of application domains (Anonymous, 2024). At the time of writing this paper, the publicly available version of this dataset is incomplete. We gathered 21 problems from the ComplexOR dataset to use in our experiments by augmenting the prob- [Figure residue: "Task: Debug the runtime error for the material capacity limit constraint"]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c5948001-890c-4cdb-a9f5-85cba21f0c4b
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## Prompt [Figure residue: a debugging prompt shown alongside connection-graph nodes (MachineTimeCap, LaborCap, MaterialCap, MachineTimeReq, LaborReq, MaterialReq, RevenuePerProd, Production) and the constraint "For each material, the amount used should not exceed the available capacity".] The execution of the following code results in a runtime error: import numpy as np import gurobipy as gp R = data["R"] # scalar parameter P = data["P"] # scalar parameter MaterialCap = np.array(data["MaterialCap"]) # ['R'] MaterialReq = np.array(data["MaterialReq"]) # ['R', 'P'] Prod = model.addVars(P, vtype=gp.GRB.CONTINUOUS, name="production")
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
61cbb87d-a1e9-4a1e-89b3-594f861119d7
# Add constraints for the quantity of raw material usage not exceeding available amounts for j in range(N): model.addConstr(gp.quicksum(MaterialReq[j, i] * Prod[i] for i in range(P)) \\ <= Available[j], name=f"material_usage_limit_{j}") Here is the error message: IndexError: index 4 is out of bounds for axis 0 with size 4 Identify the error and fix it. lems that lack data with synthetic data. This modified dataset is available in our supplementary materials. NLP4LP. As shown in Table 1, existing datasets for natural language optimization modeling lack problems with long descriptions. Real-world problems are often much longer; see, e.g., (Holzer et al., 2023). To address this issue, we create NLP4LP (Natural Language Processing for Linear Programming), a benchmark consisting of 54 LP and 13 MILP problems (67 instances in total). NLP4LP problems are drawn from textbooks and lecture notes on optimization (Bertsimas & Tsitsiklis, 1997b; Williams, 2013; Nace, 2020), including facility location, network flow, scheduling, portfolio management, and energy optimization problems. These resources were created before 2021, so it is possible that parts of these books have been used to train LLMs. However, none of these textbooks includes code. Moreover, our results show that LLMs still find it challenging to formulate and solve these problems. For each instance, NLP4LP includes the description, a sample parameter data file, and the optimal value, obtained either from the textbook solution manual or by solving the instance by hand. Together, NLP4LP and ComplexOR offer a variety of challenging optimization problems of different lengths, facilitating research on automated optimization modeling.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fa38fe7b-4c2e-4608-ae23-c348711a5da8
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 4.2. Overall Performance To evaluate the overall performance of OptiMUS, we compare it with standard prompting, Reflexion, and Chain-of-Experts (CoE) (Shinn et al., 2023; Anonymous, 2024). Reflexion is the highest-performing general-purpose framework and CoE is the state-of-the-art method for natural-language optimization modeling. Three main metrics have been used in the literature: accuracy, compilation error (CE) rate, and runtime error (RE) rate. However, a method can generate totally irrelevant short code that runs, or fix runtime and compilation errors by completely removing relevant sections of the code. Hence, we only compare the models' accuracy. Results are presented in Table 2. OptiMUS outperforms all other methods on all datasets by a large margin. This remarkable performance improvement highlights the importance of modularity and structure, compared to a single prompt, for solving complex problems using LLMs. The next experiments clarify which features of OptiMUS contribute to its good performance.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c006511e-9771-4c56-9236-972f164982b3
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 4.3. Ablation Study Table 3 shows the impact of debugging and of the choice of LLM on the performance of OptiMUS. One interesting observation is the significant performance drop that occurs when smaller LLMs are used instead of GPT-4. The first reason is that the OptiMUS prompts are on average longer than those of the other methods and involve more complicated reasoning. Smaller LLMs are worse at reasoning (Wang et al., 2023; OpenAI, 2023). The second reason is the novel and modular structure of OptiMUS's prompts. Prompts used in the other methods mostly adhere to a question-answering format that is abundant in the public domain (e.g., posting the whole bogus code snippet and asking for the cor- | | NL4OPT | ComplexOR | NLP4LP | |----------------|----------|-------------|----------| | Standard | 47.3% | 9.5% | 35.8% | | Reflexion | 53% | 19.1% | 46.3% | | CoE | 64.2% | 38.1% | 53.1% | | OptiMUS (Ours) | 78.8% | 66.7% | 72.0% | | | NL4OPT | ComplexOR | NLP4LP | |-----------------|----------|-------------|----------| | OptiMUS (GPT-4) | 78.7% | 66.7% | 71.6% | | w/o debugging | 72.3% | 57.1% | 58.2% | | w/ GPT-3.5 Mngr | 74.9% | 52.4% | 53.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
341c3e16-a112-4c3f-8052-7d2d7563444f
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 4.3. Ablation Study | | NL4OPT | ComplexOR | NLP4LP | |-----------------|----------|-------------|----------| | OptiMUS (GPT-4) | 78.7% | 66.7% | 71.6% | | w/o debugging | 72.3% | 57.1% | 58.2% | | w/ GPT-3.5 Mngr | 74.9% | 52.4% | 53.7% | | w/ GPT-3.5 | 28.6% | 9.5% | 14.4% | | w/ Mixtral-8x7B | 6.6% | 0.0% | 3.0% | rect version is common on StackOverflow, or writing the whole problem description and then the complete formulation is common in optimization textbooks). However, in OptiMUS, the prompts are more complex and not common in human-human interactions. Smaller LLMs have limited generalization and reasoning abilities and therefore show poor performance on such prompts (OpenAI, 2023). Fine-tuning smaller models on these novel prompt templates might improve their performance and reduce the cost of running a system like OptiMUS. We also evaluated a version of OptiMUS which uses GPT-3.5 for the manager and GPT-4 for the other agents. We can see that in NL4OPT the difference in performance is small. The reason is that most instances of NL4OPT are solved with a simple chain of formulation-programming-evaluation. However, in ComplexOR and NLP4LP, where more complicated interactions between agents are required, the manager's importance becomes more visible. Moreover, we ran experiments in which the debugging feature of the programmer agent was disabled. As with the manager, we see that debugging is more important on the more complicated datasets.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
22564afe-01d5-41fa-9c27-413217066b50
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 4.4. Sensitivity Analysis Figure 5 shows how the maximum number of times the manager is allowed to select agents affects the accuracy. For NL4OPT, most problems are solved by selecting each of the formulator, programmer, and evaluator agents only once. However, for ComplexOR and NLP4LP, OptiMUS often makes mistakes at the beginning and iteratively fixes them by selecting the other agents multiple times. Section 4.5 shows the number of times each agent is selected per instance. As expected, the average selection frequency is higher in ComplexOR and NLP4LP. Moreover, the programmer and evaluator agents are selected more often than the formulator. This bias is reasonable: - Coding errors are more common. LLMs often generate code with trivial bugs that are easy to fix. In OptiMUS, the programmer agent fixes such bugs. - Coding errors are easier to identify and fix. In contrast, identifying bugs in the formulation requires deeper reasoning and is harder. Hence the manager in OptiMUS is prompted to prioritize fixing the code before considering errors in the formulation. The formulator is only selected for debugging if the programmer claims that the code is correct. Hence, in our experiments, we observe that the programmer is selected more often than the formulator. Table 4 shows the average prompt length of OptiMUS and CoE for different datasets. Observe that the prompt length for OptiMUS barely changes across datasets, while the prompt length for CoE increases on more challenging datasets. The reason is the modular approach, which allows OptiMUS to extract and process only the relevant context for each LLM call. Unlike non-modular methods, OptiMUS can scale to larger and longer problems.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2f2f512d-825a-424a-ab78-f4aac5259d94
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 4.5. Failure Cases To understand its strengths and weaknesses, we analyze the most common reasons why OptiMUS fails (Table 5). We categorize failure cases into the following groups: - Missing or wrong constraints: OptiMUS generates a wrong constraint in the preprocessing step (e.g., price ≥ 0 where price is a parameter), or fails to extract all of the constraints from the description. - Incorrect model: OptiMUS tackles the problem with an incorrect mathematical model (e.g., defining binary variables for visiting cities instead of links in TSP). - Coding error: OptiMUS does not generate error-free code even after debugging. Coding errors often occur when the LLM is confused by the language used (e.g., in the "prod" problem in ComplexOR, the description explicitly refers to "parameters" and "variables"). We normalize the failure rates to sum to 1.0. Incorrect modeling is more common on datasets with more complicated problems, while on the easier dataset NL4OPT the model is less likely to be wrong. Understanding and interpreting the problems is also challenging for LLMs, resulting in formulations with missing or wrong constraints. Fine-tuning might improve the performance of LLMs on this task and is an important direction for future research.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3dd79e69-b527-4428-87d7-f1feb28645d2
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 5. Conclusion How can LLMs collaborate and divide work in order to achieve complex goals? This paper interrogates this question in the domain of optimization and showcases the importance of modular structure. We develop OptiMUS, a modular LLM-based agent designed to formulate and solve optimization problems from natural language descriptions. Our research serves as a proof-of-concept, illustrating the potential for automating various stages of the optimization process by combining LLMs with traditional solvers. To showcase the performance of OptiMUS, we released NLP4LP, a dataset of long and challenging optimization problems to demonstrate the efficacy of the techniques implemented within OptiMUS. OptiMUS achieves SOTA performance across all existing datasets, and scales to prob- Table 4. Average prompt length of OptiMUS and CoE: | | NL4OPT | ComplexOR | NLP4LP | |---------|------------|-------------|-------------| | CoE | 2003 ± 456 | 3288 ± 780 | 3825 ± 1002 | | OptiMUS | 2838 ± 822 | 3241 ± 1194 | 3146 ± 1145 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8d722315-cde7-47e1-8716-f8c1522c0072
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 5. Conclusion Table 4. Average prompt length of OptiMUS and CoE: | | NL4OPT | ComplexOR | NLP4LP | |---------|------------|-------------|-------------| | CoE | 2003 ± 456 | 3288 ± 780 | 3825 ± 1002 | | OptiMUS | 2838 ± 822 | 3241 ± 1194 | 3146 ± 1145 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a9b4c143-6ebd-495c-ac5f-5d6fa62827e6
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 5. Conclusion Table 5. When OptiMUS fails, why? | Mistake | NL4OPT | ComplexOR | |---------------------|--------|-----------| | Incorrect modeling | 43.0% | 62.5% | | Missing constraints | 36.0% | 12.6% | | Coding errors | 21.0% | 24.9% | lems with large amounts of data and long descriptions. Real-world optimization problems are often complex and multifaceted. Developing LLM-based solutions for these problems requires domain-specific considerations, including integrating existing optimization techniques to leverage problem structure. We are at the early stages of this research, but anticipate significant developments that will enable these systems to address more complex, industrial
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2c1731d9-9ea9-456d-83d0-d4f5f7c3a8e9
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 5. Conclusion | Coding errors | 21.0% | 24.9% | lems with large amounts of data and long descriptions. Real-world optimization problems are often complex and multifaceted. Developing LLM-based solutions for these problems requires domain-specific considerations, including integrating existing optimization techniques to leverage problem structure. We are at the early stages of this research, but anticipate significant developments that will enable these systems to address more complex, industrial-level problems. It is interesting to note that the challenge of using AI for an applied domain is much larger in safety-critical domains such as self-driving, which demand extremely high accuracy, than in domains where AI can function as an assistant and where answers are easy to check, as in theorem proving or optimization. Here, AI systems with moderate accuracy can still usefully augment human effort. Future directions. Smaller LLMs are faster and cheaper, but our experiments indicate that they perform poorly on optimization modeling out of the box. Identifying which prompts might benefit from fine-tuned small models and which require large (and expensive) LLM calls is an important topic for future research. Furthermore, we believe that integrating user feedback into the process can improve the performance of agents on natural-language optimization modeling. Studying interactions between such agents and their users is an exciting avenue. Another important direction is to automatically select the best solver based on a comprehensive evaluation of both accuracy and runtime requirements. Additionally, it would be interesting to see how the modular LLM structure presented here can be enhanced using reinforcement learning to teach the manager how to choose the next agent.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6ad56faa-41bc-4958-b7c9-8babd28d0ccc
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## 6. Impact Statement This paper presents work whose goal is to advance the field of optimization modeling. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6c4b5cef-8bbf-4811-8e3c-49e1fcc73484
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## A. Optimization Techniques Optimization solvers exploit problem-specific structure to improve performance when solving MILPs (Gamrath et al., 2016) and often provide a customized interface for these special structures. Using the interface not only reduces the complexity of (and potential for errors in) auxiliary variables or constraints, but also informs the solver about the existence of structure that can be exploited to solve the problem faster. Moreover, the solver's performance can suffer when these structures are not signaled in the model. For example, a bad choice of big-M coefficient when reformulating an indicator variable can reduce the strength of the linear relaxation. Typical examples of structure include Special Ordered Sets (SOS) (Beale & Forrest, 1976), indicator variables, and general constraints (Bertsimas & Tsitsiklis, 1997a). Although state-of-the-art optimization solvers can detect some problem structures automatically, it works better to specify structure during problem formulation. Hence the formulator is prompted to leverage advanced optimization techniques and structures, including 1) Special Ordered Sets, 2) indicator variables, 3) general constraints, 4) SAT and constraint programming problems, and 5) totally unimodular problem detection. OptiMUS iterates through a sequence of "cheatsheet" prompts (Figure 7), each corresponding to one of these structures. Within each prompt, the LLM is provided with the description of the structure, together with an example illustrating how the structure should be exploited. The LLM is asked to decide whether the structure can be applied to the existing formulation. Upon identifying the appropriate structure, the formulation is adjusted to utilize the customized solver interface when available.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
119ead41-5814-46e4-abcb-df9a55325b99
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## B. Prompts B.1. Manager Prompt You're a manager in a team of optimization experts. The goal of the team is to solve an optimization problem. Your task is to choose the next expert to work on the problem based on the current situation. - The user has already given us the problem description, the objective function, and the parameters. Only call the user proxy if there is a problem or something ambiguous or missing. Here's the list of agents in your team: {agents} And here's the history of the conversation so far: {history} Considering the history, if you think the problem is solved, type DONE. Otherwise, generate a json file with the following format: {{ "agent_name": "Name of the agent you want to call next", "task": "The task you want the agent to carry out" }} to identify the next agent to work on the problem, and also the task it has to carry out. - If there is a runtime error, ask the programmer agent to fix it. - Only generate the json file, and don't generate any other text. - If the latest message in history says that the code is fixed, ask the evaluator agent to evaluate the code!
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
dc2f488c-7af7-4fe9-bdb8-a44af7ca8f60
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## B.2. Formulation Generation Prompt You are an expert mathematical formulator and an optimization professor at a top university. Your task is to model {clausType} of the problem in the standard LP or MILP form. Here is a {clausType} we need you to model: {targetDescription} Here is some context on the problem: {background} Here is the list of available variables: {variables} And finally, here is the list of input parameters: {parameters} First, take a deep breath and explain how we should define the {clausType}. Feel free to define new variables if you think it is necessary. Then, generate a json file accordingly with the following format (STICK TO THIS FORMAT!): {{ "{clausType}": {{ "description": "The description of the {clausType}", "formulation": "The LaTeX mathematical expression representing the formulation of the {clausType}" }}, "auxiliary_constraints": [ {{ "description": "The description of the auxiliary constraint", "formulation": "The LaTeX mathematical expression representing the formulation of the auxiliary constraint" }} ], "new_variables": [ {{ "definition": "The definition of the variable", "symbol": "The symbol for the variable", "shape": [ "symbol1", "symbol2", ... ] }} ], }} - Your formulation should be in LaTeX mathematical format (do not include the $ symbols). - Note that I'm going to use the python json.loads() function to parse the json file, so please make sure the format is correct (don't add ',' before enclosing '}}' or ']' characters). - Generate the complete json file and don't omit anything. - Use '```json' and '```' to enclose the json file. - Important: You can not define new parameters. You can only define new variables. Use CamelCase and full words for new variable symbols, and do not include indices in the symbol (e.g. ItemsSold instead of itemsSold or items_sold or ItemsSold_i) - Use \\textup{{}} when writing variable and parameter names. For example (\\sum_{{i=1}}^{{N}} \\textup{{ItemsSold}}_{{i}} instead of \\sum_{{i=1}}^{{N}} ItemsSold_{{i}}) - Use \\quad for
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
da9943db-72a0-44a4-ab61-7126ce3261b7
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## B.2. Formulation Generation Prompt ose the json file. - Important: You can not define new parameters. You can only define new variables. Use CamelCase and full words for new variable symbols, and do not include indices in the symbol (e.g. ItemsSold instead of itemsSold or items_sold or ItemsSold_i) - Use \\textup{{}} when writing variable and parameter names. For example (\\sum_{{i=1}}^{{N}} \\textup{{ItemsSold}}_{{i}} instead of \\sum_{{i=1}}^{{N}} ItemsSold_{{i}}) - Use \\quad for spaces. - Use empty list ([]) if no new variables are defined. - Always use non-strict inequalities (e.g. \\leq instead of <), even if the constraint is strict. - Define auxiliary constraints when necessary. Set it to an empty list ([]) if no auxiliary constraints are needed. If new auxiliary constraints need new variables, add them to the "new_variables" list too. Take a deep breath and solve the problem step by step.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5f4a77e8-ad2b-4ad6-8421-a941fc154d6d
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## B.3. Formulation Fixing Prompt You are a mathematical formulator working with a team of optimization experts. The objective is to tackle a complex optimization problem, and your role is to fix a previously modelled {target}. Recall that the {target} you modelled was {constraint} and the formulation you provided was {formulation} The error message is {error} Here are the variables you have defined so far: {variables} Here are the parameters of the problem: {parameters} Your task is to carefully inspect the old {target} and fix it if you find that it is actually wrong. After fixing it, modify the formulation. Please return the fixed JSON string for the formulation. The current JSON is {json} Take a deep breath and solve the problem step by step.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fec053db-f01b-4dce-ae46-8ab2088558f1
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## B.4. Clause Coding Prompt You're an expert programmer in a team of optimization experts. The goal of the team is to solve an optimization problem. Your responsibility is to write {solver} code for different {target}s of the problem. Here's a {target} we need you to write the code for, along with the list of related variables and parameters: {context} - Assume the parameters and variables are defined, and gurobipy is imported as gp. Now generate the code accordingly and enclose it between "=====" lines. - Only generate the code and the ===== lines, and don't generate any other text. - If the {target} requires changing a variable's integrality, generate the code for changing the variable's integrality rather than defining the variable again. - If there is no code needed, just generate the comment line (using # ) enclosed in ===== lines explaining why. - Variables should come before parameters when defining inequality {target}s in gurobipy (because of the gurobi parsing order syntax) Here's an example: **input**: {{ "description": "in month m, it is possible to store up to storageSize_{{m}} tons of each raw oil for use later.", "formulation": "\(storage_{{i,m}} \leq storageSize_{{m}}, \quad \\forall i, m\)", "related_variables": [{{ "symbol": "storage_{{i,m}}", "definition": "quantity of oil i stored in month m", "shape": [ "I", "M" ] }}], "related_parameters": [{{ "symbol": "storageSize_{{m}}", "definition": "storage size available in month m", "shape": [ "M" ] }}] }} ***output***: ===== # Add storage capacity constraints for i in range(I): for m in range(M): model.addConstr(storage[i, m] <= storageSize[m], name="storage_capacity") ===== Take a deep breath and approach this task methodically, step by step.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7a1c8028-52ed-4e02-8848-ebdc64001785
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## B.5. Variable Coding Prompt You're an expert programmer in a team of optimization experts. The goal of the team is to solve an optimization problem. Your responsibility is to write {solver} code for defining variables of the problem. Here's a variable we need you to write the code for defining: {variable} Assume the parameters are defined. Now generate the code accordingly and enclose it between "=====" lines. Only generate the code, and don't generate any other text. Here's an example: **input**: {{ "definition": "Quantity of oil i bought in month m", "symbol": "buy_{{i,m}}", "shape": ["I","M"] }} ***output***: ===== buy = model.addVars(I, M, vtype=gp.GRB.CONTINUOUS, name="buy") ===== - Note that the indices in the symbol (what comes after _) are not a part of the variable name in code. - Use model.addVar instead of model.addVars if the variable is a scalar. Take a deep breath and solve the problem.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
37ddaec3-3ca8-47f9-b3ae-e701d005b2de
# OptiMUS: Scalable Optimization Modeling With (MI)LP Solvers And Large Language Models ## B.6. Debugging Prompt You're an expert programmer in a team of optimization experts. The goal of the team is to solve an optimization problem. Your responsibility is to debug the code for the problem. When running the following code snippet, an error happened: {context_code} {error_line} and here is the error message: {error_message} We know that the code for importing packages and defining parameters and variables is correct, and the error is because of this last part, which is for modeling the {target}: {error_line} First reason about the source of the error. Then, if the code is correct and the problem is likely to be in the formulation, generate a json in this format (the reason is why you think the problem is in the formulation): {{ "status": "correct", "reason": "A string explaining why you think the problem is in the formulation" }} otherwise, fix the last part of the code and generate a json file with the following format: {{ "status": "fixed", "fixed_code": "A string representing the fixed {target} modeling code to be replaced with the last part code" }} - Note that the fixed code should be the fixed version of the last part of the code, not the whole code snippet. Only fix the part that is for modeling the {target}. - Do not generate any text after the json file. - Variables should come before parameters when defining inequality constraints in gurobipy (because of the gurobi parsing order syntax) - The parameter shapes and parameter definitions are correct. Take a deep breath and solve the problem step by step.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10172v1.md", "file_path": "paper_data/2402.10172v1.md", "file_size": 50686, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4ec798a2-f94c-4f3f-8eea-e6dec62bdf0c
# An Embarrassingly Simple Approach For LLM With Strong ASR Capacity Ziyang Ma♠, Guanrou Yang♠, Yifan Yang♠, Zhifu Gao♡, Jiaming Wang♡, Zhihao Du♡, Fan Yu♡, Qian Chen♡, Siqi Zheng♡, Shiliang Zhang♡, Xie Chen♠†, ♠ MoE Key Lab of Artificial Intelligence, AI Institute, X-LANCE Lab, Shanghai Jiao Tong University, Shanghai, China ♡ Alibaba Group, China
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ae913043-c018-40ee-ae72-a133c5aeb567
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## Abstract In this paper, we focus on solving one of the most important tasks in the field of speech processing, i.e., automatic speech recognition (ASR), with speech foundation encoders and large language models (LLM). Recent works have complex designs such as compressing the output temporally for the speech encoder, tackling modal alignment for the projector, and utilizing parameter-efficient fine-tuning for the LLM. We found that delicate designs are not necessary, while an embarrassingly simple composition of off-the-shelf speech encoder, LLM, and the only trainable linear projector is competent for the ASR task. To be more specific, we benchmark and explore various combinations of LLMs and speech encoders, leading to the optimal LLM-based ASR system, which we call SLAM-ASR1. The proposed SLAM-ASR provides a clean setup and little task-specific design, where only the linear projector is trained. To the best of our knowledge, SLAM-ASR achieves the best performance on the Librispeech benchmark among LLM-based ASR models and even outperforms the latest LLM-based audio-universal model trained on massive pair data. Finally, we explore the capability emergence of LLM-based ASR in the process of modal alignment. We hope that our study can facilitate the research on extending LLM with cross-modality capacity and shed light on the LLM-based ASR community. 1 Introduction Automatic speech recognition (ASR) stands as a cornerstone in the realm of intelligent speech technology, enabling machines to understand and transcribe human speech. The significance of ASR in enhancing human-computer interaction and accessibility makes it a crucial area of research and applications in the field of speech processing. The evolution of ASR technology has been marked by the adoption of various paradigms, each representing a leap forward in terms of accuracy, efficiency, and applicability (Li, 2022). Among these, supervised methods including connectionist temporal classification (CTC) (Graves et al., 2006), attention-based encoder-decoder (AED) (Chan et al., 2016), recurrent neural network transducer (RNN-T) (Graves et al., 2013) and their variants have been pivotal. In addition, employing selfsupervised methods for pre-training followed by supervised methods for fine-tuning has also proven to be effective (Baevski et al
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0a0725e7-3d69-44f6-8598-f6bf648b8b51
# An Embarrassingly Simple Approach For LLM With Strong ASR Capacity ## Abstract of various paradigms, each representing a leap forward in terms of accuracy, efficiency, and applicability (Li, 2022). Among these, supervised methods including connectionist temporal classification (CTC) (Graves et al., 2006), attention-based encoder-decoder (AED) (Chan et al., 2016), recurrent neural network transducer (RNN-T) (Graves et al., 2013) and their variants have been pivotal. In addition, employing self-supervised methods for pre-training followed by supervised methods for fine-tuning has also proven to be effective (Baevski et al., 2020; Hsu et al., 2021; Chen et al., 2022; Ma et al., 2023; Yang et al., 2023). However, each paradigm comes with its own set of challenges and limitations, such as the need for extensive labeled data, difficulties in capturing long-range context dependencies in speech, and huge training costs. In this evolving landscape, the advent of large language models (LLMs) has introduced a groundbreaking paradigm: the multimodal large language model (MLLM) framework (Liu et al., 2023; Li et al., 2023a; Gao et al., 2024), based on a decoder-only architecture. This innovative approach diverges from traditional ASR by utilizing the immense generative capacity of LLMs, which are pre-trained on vast corpora encompassing diverse linguistic contexts, leading to LLM-based ASR. The evolution of the ASR paradigm from previous NN-based ASR models to LLM-based ASR models stresses differences across loss and criterion design, text prior knowledge, and model scale. This paradigm harnesses pre-existing linguistic knowledge, enabling a more holistic understanding of language, which, in turn, translates to significant improvements in the speech recognition task. The architecture of LLM-based ASR can be conceptualized as consisting of three primary components: a speech encoder, a projector, and an LLM. Recent works in LLM-based ASR often venture into complex designs, such as compressing the output temporally from the speech encoder (Wu et al., 2023; Fathullah et al., 2023), tackling modal alignment with the projector (Tang et al., 2024; Yu et al., 2024), and fine-tuning the LLM partly or fully (Wu et al., 2023; Li et al.,
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
49907c04-8dc5-40f1-9240-8efd57ce6dc2
# An Embarrassingly Simple Approach For LLM With Strong ASR Capacity ## Abstract task. The architecture of LLM-based ASR can be conceptualized as consisting of three primary components: a speech encoder, a projector, and an LLM. Recent works in LLM-based ASR often venture into complex designs, such as compressing the output temporally from the speech encoder (Wu et al., 2023; Fathullah et al., 2023), tackling modal alignment with the projector (Tang et al., 2024; Yu et al., 2024), and fine-tuning the LLM partly or fully (Wu et al., 2023; Li et al., 2023b; Tang et al., 2024; Wang et al., 2023). Despite these efforts, the outcomes have not always met expectations, indicating a potential misalignment between the complexity of designs and the efficacy of real-world speech recognition tasks. This observation led to a pivotal realization in our research: the essence of an effective LLM-based ASR system lies in the synergy of a powerful speech encoder and a suitable LLM, and then, most notably, a single trainable linear projector is enough to align between modalities. Our findings challenge the prevailing notion that complexity equates to superiority in LLM-based ASR system design. In this work, we first benchmark the automatic speech recognition task performance with different combinations of well-known speech encoders and the latest released large language models. Experiments show that LLMs with supervised fine-tuning (SFT, a.k.a. chat models) perform better than raw pre-trained LLMs for the ASR task, while speech encoders fine-tuned with limited data from self-supervised models outperform supervised foundation ASR encoders. Building upon these insights, we propose SLAM-ASR, in which only a linear projector is trained to conduct the ASR task. SLAM-ASR only requires 4 GPUs for 4 hours of training to achieve state-of-the-art performance on the Librispeech (Panayotov et al., 2015) corpus, compared with other LLM-based ASR models and a series of previous best-performing NN-based ASR models. Besides, our work embarks on an in-depth exploration of the ability of LLM-based ASR models. Interestingly, we observe the capability emergence phenomenon during LLM-based ASR training. The benchmark and experimental exploration show how we harvest the
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
78cfefd6-6acb-47d2-85ba-c033f75b6245
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## Abstract is trained to conduct the ASR task. SLAM-ASR only requires 4 GPUs for 4 hours of training to achieve state-of-the-art performance on the Librispeech (Panayotov et al., 2015) corpus, compared with other LLM-based ASR models and a series of previous best-performing NN-based ASR models. Besides, our work embarks on an in-depth exploration of the ability of LLM-based ASR models. Interestingly, we observe the capability emergence phenomenon during LLM-based ASR training. The benchmark and experimental exploration show how we harvest the exciting result step by step with a clean setup and little task-specific design.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ea875bbd-b4b4-4cd2-b8b3-bae308c6f61f
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 2 Speech Recognition Meets Large Language Model 2.1 Previous Nn-Based Asr Previous NN-based ASR systems are designed to align the speech signal with the label sequence accurately. As shown in Table 1, different paradigms

| Model | Loss |
|-----------------------|-----------------------------|
| *Previous NN-based ASR* | |
| Quartznet (Kriman et al., 2020) | CTC |
| Whisper (Radford et al., 2023) | AED (cross entropy) |
| Branchformer (Peng et al., 2022) | CTC + AED |
| Conformer (Gulati et al., 2020) | Neural transducer |
| Zipformer (Yao et al., 2024) | Pruned RNN-T |
| Paraformer (Gao et al., 2022) | CIF |
| *LLM-based ASR* | |
| LauraGPT (Wang et al., 2023) | Decoder-only, cross entropy |
| SpeechGPT (Zhang et al., 2023) | Decoder-only, cross entropy |
| Li et al.'s (2023b) | Decoder-only, cross entropy |
| SpeechLLaMA (Wu et al., 2023) | Decoder-only, cross entropy |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
277a1c52-9377-48b4-aff3-3c485c50f164
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 2 Speech Recognition Meets Large Language Model 2.1 Previous Nn-Based Asr (Table 1, continued) LLM-based ASR models — LauraGPT (Wang et al., 2023), SpeechGPT (Zhang et al., 2023), Li et al.'s (2023b), SpeechLLaMA (Wu et al., 2023), Qwen-Audio (Chu et al., 2023), SALMONN (Tang et al., 2024), Fathullah et al.'s (2023), Yu et al.'s (2024), and SLAM-ASR — are decoder-only models trained with cross entropy; for SLAM-ASR the learnable module is a linear projector.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
30811563-054e-4b58-ade9-59cb595f87e3
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 2 Speech Recognition Meets Large Language Model 2.1 Previous Nn-Based Asr

(Table 1, continued: Fathullah et al.'s (2023), Yu et al.'s (2024), and SLAM-ASR — learnable module: Linear.)

are carried out with a series of representative models. Quartznet (Kriman et al., 2020) leverages CTC (Graves et al., 2006), the first E2E technology widely adopted in ASR, yet facing performance limitations due to its frame-independent assumption. Whisper (Radford et al., 2023) utilizes massive paired speech-text data to train the attention-based encoder-decoder (Chan et al., 2016) (AED, a.k.a. LAS in ASR) architecture, empowering the model with the ability to recognize and translate speech in multiple languages. Branchformer (Peng et al., 2022) employs a hybrid architecture that combines CTC and AED (Chan et al., 2016); the integration of the attention mechanism addresses this limitation by introducing implicit language modeling across speech frames. Conformer (Gulati et al., 2020) utilizes the neural transducer (Graves et al., 2013), which directly discards the frame-independent assumption by incorporating a label decoder and a joint network, resulting in superior performance. Zipformer (Yao et al., 2024) adopts Pruned RNN-T (Kuang et al., 2022), a memory-efficient variant of the transducer loss that utilizes the pruned paths with minor posterior probabilities. Paraformer (Gao et al., 2022) uses Continuous Integrate-and-Fire (CIF) (Dong and Xu, 2020), which offers a soft and monotonic alignment mechanism, estimating the number of tokens and generating hidden variables.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
73088e49-d021-4d8b-808f-b31e2a582be1
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 2.2 Existing Llm-Based Asr LLM-based ASR models adopt decoder-only architectures based on a pre-trained LLM as a new paradigm. LauraGPT (Wang et al., 2023) connects a modified Conformer (Gulati et al., 2020) encoder with Qwen-2B (Bai et al., 2023) for end-to-end training on multiple speech and audio tasks, with full parameter fine-tuning performed. SpeechGPT (Zhang et al., 2023) discretizes speech tokens with HuBERT (Hsu et al., 2021) and fine-tunes LLaMA-13B (Touvron et al., 2023a) in multiple stages. Although both models are computationally expensive, their performance is limited. (Li et al., 2023b) and (Wu et al., 2023) propose to use inserted Gated-XATT-FFN (Alayrac et al., 2022) or side-branched LoRA (Hu et al., 2022) to fine-tune the LLM partially for conducting the ASR task, along with a trainable speech encoder. Qwen-Audio (Chu et al., 2023) is an audio-universal model, which uses massive paired data to fine-tune the encoder initialized from the Whisper-large (Radford et al., 2023) model, optimized using the loss of the frozen Qwen-7B (Bai et al., 2023) output for backpropagation. All these models require fine-tuning the encoder. SALMONN (Tang et al., 2024) uses Whisper-large (Radford et al., 2023) and BEATs (Chen et al., 2023) to encode speech and audio, respectively; together with a window-level Q-Former (win-QF), it can perform a variety of audio tasks. (Fathullah et al., 2023) connects Conformer with LLaMA-7B to successfully conduct monolingual and multilingual ASR. These models require the use of LoRA to be effective. The most closely related work is (Yu et al., 2024), which achieves good results on ASR using only a segment-level Q-Former (seg-QF), similar to win-QF, as the projector. The random concatenation training strategy is designed to alleviate the natural problem of Whisper (Radford et al
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9026e789-434b-4899-bf81-8b8ef990a633
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 2.2 Existing Llm-Based Asr -Former (win-QF), can perform a variety of audio tasks. (Fathullah et al., 2023) connects Conformer with LLaMA-7B to successfully conduct monolingual and multilingual ASR. These models require the use of LoRA to be effective. The most closely related work is (Yu et al., 2024), which achieves good results on ASR using only a segment-level Q-Former (seg-QF), similar to win-QF, as the projector. The random concatenation training strategy is designed to alleviate the natural problem of Whisper (Radford et al., 2023) requiring 30-second input speech.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
70140705-15fe-4aa6-9bcb-0448d23967b2
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 2.3 Proposed Method As shown in Figure 1, an embarrassingly simple framework is proposed to train the SLAM-ASR model. For each sample, given speech $\mathbf{X^{S}}$, the corresponding transcript $\mathbf{X^{T}}$, and the prompt $\mathbf{X^{P}}$, we first convert the speech into speech features through the speech encoder, which can be written as: $$\mathbf{H^{S}}=Encoder(\mathbf{X^{S}}),\tag{1}$$ where $\mathbf{H^{S}}=[h_{1}^{S},\cdots,h_{T}^{S}]$ has $T$ frames in the temporal dimension. Due to the sparsity of speech representation, the speech feature sequence $\mathbf{H^{S}}$ is still very long for the LLM to tackle, so we downsample the speech with a downsampler. More explicitly, we concatenate every $k$ consecutive frames in the feature dimension to perform a $k$-times downsampling, leading to $\mathbf{Z^{S}}=[z_{1}^{S},\cdots,z_{N}^{S}]$, where $$z_{i}^{S}=h_{k*i}^{S}\oplus h_{k*i+1}^{S}\oplus\cdots\oplus h_{k*i+k-1}^{S},\tag{2}$$ and $$N=T//k.\tag{3}$$ Next, a projector is applied to transform the speech features $\mathbf{Z^{S}}$ into $\mathbf{E^{S}}$ with the same dimension as the LLM input embedding. In our experiments, we use a single hidden layer followed by a ReLU activation and a regression layer as the projector, denoted as: $$\mathbf{E^{S}}=Linear(ReLU(Linear(\mathbf{Z^{S}}))).\tag{4}$$ Finally, we feed the speech embedding $\mathbf{E^{S}}$, transcript embedding $\mathbf{E^{T}}$, and prompt embedding $\mathbf{E^{P}}$ into the template to compose the final input $\mathbf{E}$ of the LLM, denoted as: $$\mathbf{E^{T}}=Tokenizer(\mathbf{X^{T}}),\tag{5}$$ $$\mathbf{E^{P}}=Tokenizer(\mathbf{X^{P}}),\tag{6}$$
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
91ca198b-c8a5-4a70-a82d-a4dbdb2691ef
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 2.3 Proposed Method we use a single hidden layer followed by a ReLU activation and a regression layer as the projector, denoted as: $$\mathbf{E^{S}}=Linear(ReLU(Linear(\mathbf{Z^{S}}))).\tag{4}$$ Finally, we feed the speech embedding $\mathbf{E^{S}}$, transcript embedding $\mathbf{E^{T}}$, and prompt embedding $\mathbf{E^{P}}$ into the template to compose the final input $\mathbf{E}$ of the LLM, denoted as: $$\mathbf{E^{T}}=Tokenizer(\mathbf{X^{T}}),\tag{5}$$ $$\mathbf{E^{P}}=Tokenizer(\mathbf{X^{P}}),\tag{6}$$ $$\mathbf{E}=\begin{cases}Template(\mathbf{E^{S}},\mathbf{E^{P}},\mathbf{E^{T}})&\text{if training},\\ Template(\mathbf{E^{S}},\mathbf{E^{P}})&\text{if inference},\end{cases}\tag{7}$$ wherein the template is detailed in Section 3.3 and Section 3.4.
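To make the pipeline concrete, the following is a minimal PyTorch-style sketch of the frame-stacking downsampler (Eq. 2–3) and the Linear–ReLU–Linear projector (Eq. 4). It is an illustrative reconstruction, not the authors' released code; the class names, the HuBERT X-Large feature dimension (1280), and the LLM embedding dimension (4096) are assumptions chosen to match the description (k = 5, hidden size 2048).

```python
import torch
import torch.nn as nn

class FrameStackDownsampler(nn.Module):
    """Concatenate every k consecutive frames along the feature dimension (Eq. 2-3)."""
    def __init__(self, k: int = 5):
        super().__init__()
        self.k = k

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, T, d_enc) -> (batch, T // k, k * d_enc)
        b, t, d = h.shape
        t = (t // self.k) * self.k          # drop trailing frames so T is divisible by k
        return h[:, :t, :].reshape(b, t // self.k, self.k * d)

class LinearProjector(nn.Module):
    """Linear -> ReLU -> Linear projector mapping stacked speech features into the LLM embedding space (Eq. 4)."""
    def __init__(self, d_in: int, d_hidden: int = 2048, d_llm: int = 4096):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_llm))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Example: 50 Hz encoder features (assumed d_enc = 1280) projected into an assumed 4096-dim LLM.
downsampler = FrameStackDownsampler(k=5)
projector = LinearProjector(d_in=5 * 1280, d_hidden=2048, d_llm=4096)
h_s = torch.randn(2, 500, 1280)             # 10 s of 50 Hz encoder output
e_s = projector(downsampler(h_s))           # (2, 100, 4096), i.e. 10 Hz speech embeddings
```

Because the downsampler is parameter-free and the projector is only two linear layers, the entire trainable part of the model stays in the tens of millions of parameters, which is consistent with the projector sizes reported in Section 4.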
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
742848cf-c298-4c66-b962-b113ecc3e95e
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 3 Experiment Setup Our experimental procedure obeys the KISS (*Keep It Simple, Stupid!*) principle to investigate the most critical factors for LLM-based ASR.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
44151d62-745f-4553-8fd8-755bfea31a03
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 3.1 Models And Modules 3.1.1 Speech Encoder Two types of speech encoders are investigated in this paper: supervised speech encoders trained on massive paired speech-text data and self-supervised speech encoders trained on large-scale unlabeled speech data. For supervised foundation models, we mainly survey the well-known Whisper (Radford et al., 2023) family of models3 ranging from tiny to large, including whisper-tiny, whisper-base, whisper-small, whisper-medium and whisper-large-v2. We discard the decoder of each Whisper model and only use the encoder as a feature extractor. We also investigate the Qwen-Audio Encoder4, the encoder fine-tuned from the whisper-large-v2 checkpoint on large-scale speech, audio and music data, released along with the Qwen-Audio (Chu et al., 2023) model. For self-supervised models, we investigate *HuBERT*5 and *WavLM*6 at different scales, either raw pre-trained or further fine-tuned. For the base-size models, both HuBERT (Hsu et al., 2021) and WavLM (Chen et al., 2022) perform self-supervised pre-training on the LibriSpeech (Panayotov et al., 2015) corpus with 960 hours. For the large-size models, HuBERT is trained on the LibriLight (Kahn et al., 2020) corpus with 60,000 hours, while WavLM is trained on the much larger 94,000 hours of data including LibriLight (Kahn et al., 2020), VoxPopuli (Wang et al., 2021), and GigaSpeech (Chen et al., 2021). Furthermore, HuBERT provides pre-trained models of X-Large size, which is the largest publicly available self-supervised speech encoder. All the models mentioned in this section are obtained from their official repositories. Refer to Section 4.3 for details of the parameters and hidden size of each specific model.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1ad6ba37-d416-4531-ae90-d0eb6c175be9
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 3.1.2 Llm

Two types of large language models are investigated in this paper: raw pre-trained LLMs without supervised fine-tuning, and chat LLMs with SFT (along with RLHF if conducted). For the pre-trained LLMs, we try *TinyLLaMA* (Zhang et al., 2024)7 of the 1B-magnitude and LLaMA-2 (Touvron et al., 2023b)8 of the 7B-magnitude. For the chat LLMs, *TinyLLaMA-Chat*9 of the 1B-magnitude, *Phi-2*10 of the 2B-magnitude, LLaMA-2-Chat11 and *Vicuna* (Chiang et al., 2023)12 of the 7B-magnitude are considered. Refer to Section 4.2 for details of the parameters and hidden size of each specific LLM.

Footnotes: 3 https://github.com/openai/whisper, 4 https://github.com/QwenLM/Qwen-Audio, 5 https://github.com/facebookresearch/fairseq/tree/main/examples/hubert, 6 https://github.com/microsoft/unilm/tree/master/unilm

3.1.3 Projector The projector can be viewed as an adaptor for other modalities to perform alignment with the LLM. In all our experiments, the output of the speech encoder is 50 Hz and the downsampling rate k = 5, so the input speech features $\mathbf{E^{S}}$ fed to the large model are at 10 Hz. The hidden layer dimension is set to 2048, while the dimension of the speech encoder output $\mathbf{H^{S}}$ and the LLM input dimension vary depending on the models used.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ed334381-4a67-48b0-ae29-ae078167ca9f
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 3.2 Dataset To evaluate the capabilities of the LLM-based ASR models, we use the most widely used benchmark for the ASR task, the standard Librispeech benchmark with 960 hours of training data, without any data augmentation or splicing. We use the dev-other subset as the validation set and test-clean/test-other as the test sets, each of which contains 10 hours of speech.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b60c5b9b-df4d-45bf-a790-c49c27c1c04d
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 3.3 Training Detail During training, the data is organized in the following format: "USER: <S> <P> ASSISTANT: <T>", where *<S>* represents the speech embedding, *<P>* represents the prompt, and *<T>* represents the corresponding transcribed text. We only compute the loss on *<T>*, as is common practice. For the optimizing strategy, we use AdamW (Loshchilov and Hutter, 2019) with a max learning rate of $1 \times 10^{-4}$ without weight decay. For the learning rate scheduler, we conduct warmup for the first 1,000 steps and then keep the maximum learning rate for the rest of training. The max training step is set to 100,000, but we stop early if the loss on the validation set does not decrease. For the audio embedding provided by the Whisper family of models, we found that not padding degrades the performance. As a result, we pad the speech to 30 seconds for all Whisper models and the batch size is set to 4. For other models, the length of the input audio remains consistent with the original length in the temporal dimension, and the batch size is set to 6, which greatly improves the efficiency of training and inference compared to the Whisper models.

Footnotes: 7 https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.4, 8 https://huggingface.co/meta-llama/Llama-2-7b-hf, 9 https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T, 10 https://huggingface.co/microsoft/phi-2, 11 https://huggingface.co/meta-llama/Llama-2-7b-chat-hf, 12 https://huggingface.co/lmsys/vicuna-7b-v1.5
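The sketch below illustrates one way the training template and the loss mask could be assembled; it is an assumed implementation (the helper name build_training_example is hypothetical), not the authors' code. It assumes a HuggingFace-style tokenizer, the projected speech embeddings from the earlier sketch, and the LLM's input-embedding module (e.g., model.get_input_embeddings()).

```python
import torch

IGNORE_INDEX = -100  # standard ignore index for cross entropy in PyTorch / HuggingFace

def build_training_example(tokenizer, speech_emb, prompt, transcript, embed_tokens):
    """Compose "USER: <S> <P> ASSISTANT: <T>" and mask the loss so only <T> is supervised."""
    prefix_ids = tokenizer("USER: ", return_tensors="pt").input_ids[0]
    prompt_ids = tokenizer(f"{prompt} ASSISTANT: ", add_special_tokens=False,
                           return_tensors="pt").input_ids[0]
    target_ids = tokenizer(transcript, add_special_tokens=False,
                           return_tensors="pt").input_ids[0]

    # Embed the text tokens and splice in the projected speech embeddings:
    # [USER:] [speech] [prompt ASSISTANT:] [transcript]
    inputs_embeds = torch.cat([
        embed_tokens(prefix_ids), speech_emb, embed_tokens(prompt_ids), embed_tokens(target_ids)
    ], dim=0)

    # Labels: ignore everything except the transcript tokens.
    n_prefix = prefix_ids.size(0) + speech_emb.size(0) + prompt_ids.size(0)
    labels = torch.cat([torch.full((n_prefix,), IGNORE_INDEX, dtype=torch.long), target_ids])
    return inputs_embeds, labels
```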
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
af340aeb-6d9f-438c-8b6f-1fda7b34309f
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 3.4 Inference Detail During inference, the data is organized in the following format: "*USER: <S> <P> ASSISTANT:*", where large language models answer autoregressively. Typically, LLMs utilize sampling algorithms to generate diverse textual outputs. Since speech recognition is a sequence-to-sequence task with deterministic outputs, we use beam search with *beam* = 4 to output the hypothesis corresponding to the speech.
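As a rough illustration of the decoding step, the snippet below reuses the names from the training sketch above and assumes a HuggingFace-style generate API that accepts inputs_embeds; the exact decoding code used by the authors may differ. At inference, the composed embeddings stop at "ASSISTANT:" (the transcript part is omitted), and decoding is deterministic with beam = 4.

```python
# inputs_embeds: embeddings for "USER: <speech> <prompt> ASSISTANT:" (seq_len, d_llm)
hyp_ids = llm.generate(
    inputs_embeds=inputs_embeds.unsqueeze(0),  # add batch dimension -> (1, seq_len, d_llm)
    num_beams=4,                                # beam search instead of sampling
    do_sample=False,
    max_new_tokens=256,
)
transcript = tokenizer.batch_decode(hyp_ids, skip_special_tokens=True)[0]
```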
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4e0bf664-0ca1-4a6e-8abb-ee8322624dd6
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4 Exploration In this section, we first give a basic benchmark of combinations of different LLMs and speech encoders and find that chat models perform better than raw pre-trained LLMs on the ASR task. We next benchmark different chat models and find Vicuna to be **a suitable LLM** and fine-tuned HuBERT to be **a powerful speech encoder** for conducting the ASR task. Finally, we propose SLAM-ASR, and compare SLAM-ASR with state-of-the-art previous NN-based ASR models and the latest best-performing LLM-based ASR models.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
794bb6a7-f55f-4f12-9bac-2231a9e2a1b2
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4.1 A Basic Benchmark To begin with, we benchmark Whisper models with different sizes on pre-trained LLMs and supervised fine-tuned LLMs. We pick TinyLLaMA of the 1B-magnitude and LLaMA-2 of the 7B-magnitude to make a preliminary assessment. As shown in Table 2, the performance of the ASR task improves as the speech encoder parameter size increases, but the improvement is of diminishing marginal benefit for the Whisper family of models. For the choice of LLMs, the chat models work better than the pretrained models, regardless of the size. One possible explanation is that the chat models take speech embedding as a form of "language" and perform a machine translation task, which is activated during the SFT process.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
da8ec3dc-f586-4e9e-90a2-b6f8dd54163c
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4.2 Exploration In Llms Next, we fix the speech encoder as Whisper-large and then explore a better large language model. As shown in Table 3, the Phi-2 chat model with 2.78B parameters has a comparable word error rate with LLaMA-2 with 6.74B parameters on test-other. Vicuna is an open-source chat LLM fine-tuned on user-shared conversational data collected from ShareGPT13, utilizing LLaMA as a pretrained LLM. The LLM-based ASR model shows better results when Vicuna is used as the LLM compared with LLaMA-2 and LLaMA-2-Chat. All the above experimental results confirm the capability of chat models on LLM-based ASR systems.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7b4f74c6-c944-4294-a35d-26053883fdc6
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4.3 Exploration On Speech Encoders Furthermore, we fix Vicuna as the LLM and benchmark the performance of different speech encoders. For the supervised speech encoders, the performance gets better gradually as the parameter size of the speech encoder increases, which is consistent with the trend on the LLaMA series models. When the Qwen-Audio Encoder is used as the speech encoder, the ASR performance is further improved compared with Whisper-large, which indicates that an encoder fine-tuned on another LLM (i.e., Qwen-7B) with gradient backpropagation can be transferred to a different LLM (i.e., Vicuna-7B) and maintain a certain degree of performance. For the self-supervised learning speech encoders, HuBERT Base and WavLM Base have about 95M parameters, with 768 dimensions of hidden size. In this configuration, the ASR performance is similar to Whisper-small at the same scale, where self-supervised learning does not play a role. When scaling the self-supervised speech encoders to 0.3B, WavLM Large outperforms all listed supervised speech encoders, including Whisper-medium with 0.3B parameters and Whisper-large with 0.6B parameters, while the improvement from HuBERT Base to HuBERT Large is not obvious. However, if the HuBERT Large encoder is first fine-tuned on the Librispeech 960 hours of training data, and used as the speech encoder to train the projector in our LLM-based ASR model, the model achieves a WER of 2.30% on test-clean and 4.53% on test-other, exceeding the performance with WavLM

13 https://sharegpt.com

Table 2: WER(%) ↓ of Whisper encoders combined with pre-trained vs. chat LLMs (test-clean / test-other):

| Speech Encoder | TinyLLaMA | LLaMA-2 | TinyLLaMA-Chat | LLaMA-2-Chat |
|---|---|---|---|---|
| Whisper-tiny | 12.72 / 21.64 | 16.16 / 25.17 | 9.55 / 21.01 | 8.97 / 18.77 |
| Whisper-base | 7.35 / 15.89 | 17.46 / 21.84 | 7.03 / 15.92 | 6.37 / 12.98 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0392a0d8-1cac-482d-b9db-3a51a9460ba0
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4.3 Exploration On Speech Encoders

Table 2: WER(%) ↓ of Whisper encoders combined with pre-trained vs. chat LLMs (test-clean / test-other):

| Speech Encoder | TinyLLaMA | LLaMA-2 | TinyLLaMA-Chat | LLaMA-2-Chat |
|---|---|---|---|---|
| Whisper-tiny | 12.72 / 21.64 | 16.16 / 25.17 | 9.55 / 21.01 | 8.97 / 18.77 |
| Whisper-base | 7.35 / 15.89 | 17.46 / 21.84 | 7.03 / 15.92 | 6.37 / 12.98 |
| Whisper-small | 6.61 / 11.81 | 6.41 / 10.88 | 5.94 / 11.5 | 4.51 / 8.94 |
| Whisper-medium | 4.65 / 8.95 | 3.35 / 6.10 | 5.01 / 8.67 | 2.71 / 6.37 |
| Whisper-large | 4.39 / 8.22 | 3.01 / 7.15 | 4.33 / 8.62 | 2.72 / 6.79 |

Table 3 (header): | LLM | #LLM Params | Hidden Size | #Projector Params | WER(%) ↓ (test-clean / test-other) |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f82dd964-e476-4661-bfe0-5132b163c3d7
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4.3 Exploration On Speech Encoders

(Table 3, continued)

| LLM | #LLM Params | Hidden Size | #Projector Params |
|---|---|---|---|
| *Pre-trained Model* | | | |
| TinyLLaMA | 1.10B | 2048 | 17.31M |
| LLaMA-2 | 6.74B | 4096 | 21.50M |
| *Chat Model* | | | |
| TinyLLaMA-Chat | 1.10B | 2048 | 17.31M |
| Phi-2 | 2.78B | 2560 | 18.35M |
| LLaMA-2-Chat | 6.74B | 4096 | 21.50M |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
61e65346-ed87-442d-a390-fa6745faf98e
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4.3 Exploration On Speech Encoders

(Table 3, continued)

| LLM | #LLM Params | Hidden Size | #Projector Params |
|---|---|---|---|
| Phi-2 | 2.78B | 2560 | 18.35M |
| LLaMA-2-Chat | 6.74B | 4096 | 21.50M |
| Vicuna | 6.74B | 4096 | 21.50M |

Table 4 (header): | Speech Encoder | #Encoder Params | Hidden Size | #Projector Params | WER(%) ↓ (test-clean / test-other) |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
59f86184-78e5-4661-908f-4fdae970add1
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4.3 Exploration On Speech Encoders

(Table 4, continued)

| Speech Encoder | #Encoder Params | Hidden Size | #Projector Params |
|---|---|---|---|
| *Acoustic Feature* | | | |
| FBank | - | 80 | 10.03M |
| *Supervised Speech Encoder* | | | |
| Whisper-tiny | 7.63M | 394 | 12.33M |
| Whisper-base | 19.82M | 512 | 13.64M |
| Whisper-small | 87.00M | 768 | 16.26M |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fec3d58f-ef16-4594-bbca-0a72d8dbe56a
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4.3 Exploration On Speech Encoders

(Table 4, continued)

| Speech Encoder | #Encoder Params | Hidden Size | #Projector Params |
|---|---|---|---|
| Whisper-tiny | 7.63M | 394 | 12.33M |
| Whisper-base | 19.82M | 512 | 13.64M |
| Whisper-small | 87.00M | 768 | 16.26M |
| Whisper-medium | 305.68M | 1024 | 18.88M |
| Whisper-large | 634.86M | 1280 | 21.50M |
| + Qwen-Audio Fine-tuning | 634.86M | 1280 | 21.50M |
| *Self-supervised Speech Encoder* | | | |
| HuBERT Base | 94.70M | 768 | 16.26M |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0782d77c-60f7-4aac-8a51-4f278f074be1
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4.3 Exploration On Speech Encoders

(Table 4, continued)

| Speech Encoder | #Encoder Params | Hidden Size | #Projector Params |
|---|---|---|---|
| *Self-supervised Speech Encoder* | | | |
| HuBERT Base | 94.70M | 768 | 16.26M |
| WavLM Base | 94.38M | 768 | 16.26M |
| HuBERT Large | 316.61M | 1024 | 18.88M |
| + LS-960 Fine-tuning | 316.61M | 1024 | 18.88M |
| WavLM Large | 315.45M | 1024 | 18.88M |
| HuBERT X-Large | 964.32M | 1280 | 21.50M |
| + LS-960 Fine-tuning (SLAM-ASR) | 964.32M | 1280 | 21.50M |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a819be88-e315-46ea-8809-c70c1b749cbd
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4.3 Exploration On Speech Encoders

(Table 4, continued)

| Speech Encoder | #Encoder Params | Hidden Size | #Projector Params |
|---|---|---|---|
| WavLM Large | 315.45M | 1024 | 18.88M |
| HuBERT X-Large | 964.32M | 1280 | 21.50M |
| + LS-960 Fine-tuning (SLAM-ASR) | 964.32M | 1280 | 21.50M |

Table 5: Comparison of LLM-based ASR models (✗ = frozen, ✓ = trainable):

| Model | Speech Encoder | LLM | Projector | ASR Data (h) | test-clean | test-other |
|---|---|---|---|---|---|---|
| *LLM-based ASR-specific Models* | | | | | | |
| Yu et al.'s (2024) | Whisper-large (✗) | Vicuna-13B (✗) | seg-QF (✓) | 960 | 2.3 | 5.2 |
| | | | | 4,000+ | 2.1 | 5.0 |
| SLAM-ASR | HuBERT X-Large (✗) | Vicuna-7B (✗) | Linear (✓) | 960 | 1.9 | 3.8 |
| *LLM-based Audio-universal Models* | | | | | | |
| SALMONN (Tang et al., 2024) | Whisper-large, BEATs (✗) | Vicuna-13B (LoRA) | win-QF (✓) | 1960 | 2.1 | 4.9 |
| Qwen-Audio (Chu et al., 2023) | Whisper-large (✓) | Qwen-7B (✗) | Linear (✓) | 30,000+ | 2.0 | 4.2 |

Large as the speech encoder. Further, we use HuBERT X-Large as the speech encoder, which scales the speech encoder to 1B parameters. With Librispeech-960 fine-tuned HuBERT X-Large,
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b451a845-61df-4138-a812-3c78a7fe43bf
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4.3 Exploration On Speech Encoders

(Table 5, continued: SALMONN — LoRA, win-QF ✓, 1960 h, 2.1 / 4.9; Qwen-Audio (Chu et al., 2023) — Whisper-large ✓, Qwen-7B ✗, Linear ✓, 30,000+ h, 2.0 / 4.2.)

Large as the speech encoder. Further, we use HuBERT X-Large as the speech encoder, which scales the speech encoder to 1B parameters. With Librispeech-960 fine-tuned HuBERT X-Large, our LLM-based ASR model gets a word error rate of 1.94% on test-clean and 3.81% on test-other, achieving 24.8% and 41.1% relative WER reduction over the model with Whisper-large as the speech encoder, respectively. Additionally, inspired by Fuyu (Bavishi et al., 2024), we also try to drop the speech encoder and directly feed the 80-dimensional FBank features into the projector, which lags far behind utilizing well-trained speech encoders, as shown in the first row of Table 4. The experimental results show the effectiveness of using self-supervised speech encoders and scaling the size of speech encoders.

WER(%) ↓ comparison with previous NN-based and universal ASR models:

| Model | test-clean | test-other |
|---|---|---|
| *Specialist Models* | | |
| ContextNet-large (Han et al., 2020) | 2.1 | 4.6 |
| + in-domain LM | 1.9 | 4.1 |
| Conformer-large (Gulati et al., 2020) | 2.1 | 4.3 |
| + in-domain LM | 1.9 | 3.9 |
| Branchformer-large (Peng et al., 2022) | 2.4 | 5.5 |
| + in-domain LM | 2.1 | 4.5 |
| Zipformer-large (Yao et al., 2024) | 2.0 | 4.4 |
| + in-domain LM | 1.9 | 3.9 |
| *Universal Models* | | |
| Whisper-large-v2 (Radford et al., 2023) | 2.7 | 5.2 |
| OWSM-v3.1 (Peng et al., 2024) | 2.4 | 5.0 |
| *Ours* | | |
| SLAM-ASR | 1.9 | 3.8 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5bba7bfe-6ff8-4c65-9f1d-446f5fe4025a
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4.3 Exploration On Speech Encoders

(WER(%) ↓ comparison, continued)

| Model | test-clean | test-other |
|---|---|---|
| Branchformer-large (Peng et al., 2022) | 2.4 | 5.5 |
| + in-domain LM | 2.1 | 4.5 |
| Zipformer-large (Yao et al., 2024) | 2.0 | 4.4 |
| + in-domain LM | 1.9 | 3.9 |
| *Universal Models* | | |
| Whisper-large-v2 (Radford et al., 2023) | 2.7 | 5.2 |
| OWSM-v3.1 (Peng et al., 2024) | 2.4 | 5.0 |
| *Ours* | | |
| SLAM-ASR | 1.9 | 3.8 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2563c882-ceed-441d-8540-01fedbb45a17
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4.4 Slam-Asr Here we introduce SLAM-ASR, an LLM-based ASR model with HuBERT X-Large as the speech encoder and Vicuna-7B as the LLM, with only a trainable linear projector, implemented based on the SLAM-LLM framework. As shown in Table 5, we exhibit different LLM-based ASR models from concurrent work, either ASR-specific or audio-universal. A contemporary work (Yu et al., 2024) employs Whisper-large as the speech encoder and Vicuna-13B as the LLM. The segment-level Q-Former (seg-QF) is utilized as the projector to tackle the compatibility between speech sequences and the LLM. Compared with their method, our SLAM-ASR yields 17.4/26.9% relative WER reductions on the test-clean/other subsets trained with the same 960 hours of Librispeech data. When their model is trained on a larger amount of speech over 4,000 hours, the proposed SLAM-ASR still performs better. We also compare SLAM-ASR with the latest LLM-based audio-universal models, SALMONN (Tang et al., 2024) and Qwen-Audio (Chu et al., 2023), which provide results on the Librispeech benchmark. Compared with these audio-based multimodal LLMs, SLAM-ASR still achieves better performance despite the large difference in training data. We also compare SLAM-ASR with state-of-the-art previous NN-based models. For specialist models trained on Librispeech-960, we compare SLAM-ASR with ContextNet (Han et al., 2020), Conformer (Gulati et al., 2020), Branchformer (Peng et al., 2022), and Zipformer (Yao et al., 2024). All models are of large size, and results are taken from their papers. These ASR models employ sophisticated system engineering, including SpecAugment and speed perturbation for data augmentation, and the exponential moving average technique for model averaging. To further improve performance, in-domain language models trained on the LibriSpeech language model corpus along with the LibriSpeech-960 transcripts are added for fusing or rescoring. SLAM-ASR achieves the same (test-clean) or better (test-other) ASR performance compared with
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
046f9b85-7db4-4612-9ba1-4f85e00126c1
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 4.4 Slam-Asr ), and Zipformer (Yao et al., 2024). All models are of large size, and results are taken from their papers. These ASR models employ sophisticated system engineering, including SpecAugment and speed perturbation for data augmentation, and the exponential moving average technique for model averaging. To further improve performance, in-domain language models trained on the LibriSpeech language model corpus along with the LibriSpeech-960 transcripts are added for fusing or rescoring. SLAM-ASR achieves the same (test-clean) or better (test-other) ASR performance compared with the best-performing models without using complex system engineering. Compared with general-purpose models trained on massive data, SLAM-ASR outperforms Whisper-large-v2 (Radford et al., 2023) from industry, and OWSM-v3.1 (Peng et al., 2024) from the academic community. The experimental results demonstrate the superiority of SLAM-ASR and the great potential of LLM-based ASR.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ed09719b-9bed-4679-9fa0-bbb16e0624af
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 5 Capability Emergence We observe that there is capability emergence for LLM-based ASR during training within 1 epoch (around 12k steps). Specifically, the accuracy of the next token prediction increases rapidly at the beginning of training, then starts to rise slowly, and then "spikes" at some point, as if "the ability is suddenly learned". Figure 2 demonstrates the training accuracy of the next token prediction over the training steps, where the LLM is kept as Vicuna-7B and the speech encoders vary. As can be seen from the figure, the speech encoders with better performance, in this case Whisper Large and WavLM Large, emerge earlier. A possible explanation is that our task is essentially to align speech representations with LLMs, while a powerful speech encoder can provide representations that are easier for the projector to align with LLMs. We keep the speech encoder as Whisper Large, change different LLMs, and plot the training accuracy, as shown in Figure 3. Experiments show that LLM-based ASR models with smaller LLMs such as TinyLLaMA-Chat and Phi-2 emerge earlier; however, they are not as effective as larger LLMs such as LLaMA-2-7B-Chat and Vicuna-7B. This shows that the larger language models are harder to align with speech features than the smaller ones. We also explore whether or not freezing the speech encoder affects capability emergence. We take TinyLLaMA-1.1B-Chat as the LLM and freeze or fine-tune the speech encoder, respectively. As shown in Figure 4, both models quickly rise to around 40% training accuracy in the early training process. When the speech encoder is frozen, the model completes the cross-modal alignment in 1k steps, whereas with a trainable speech encoder this point is not reached until 25k steps, which is much later. Table 7 compares the WER of the LLM-based ASR systems with the speech encoder frozen versus fine-tuned, where the former works much better. This indicates that 1k hours of speech is still not enough to train a task-specific LLM-based speech encoder; instead, freezing the speech encoder and focusing on modal alignment is the better choice.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c4d9ff6e-2012-4ab9-a950-8e99d3799ee2
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 5 Capability Emergence whereas with a trainable speech encoder this point is not reached until 25k steps, which is much later. Table 7 compares the WER of the LLM-based ASR systems with the speech encoder frozen versus fine-tuned, where the former works much better. This indicates that 1k hours of speech is still not enough to train a task-specific LLM-based speech encoder; instead, freezing the speech encoder and focusing on modal alignment is the better choice.

Table 7: WER(%) ↓ with the speech encoder frozen (✓) vs. fine-tuned (✗):

| Freezing Speech Encoder | test-clean | test-other |
|---|---|---|
| ✓ | 4.33 | 8.62 |
| ✗ | 12.79 | 22.83 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ff0b62d2-2699-4915-876d-3f7f1fe3d11e
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## 6 Conclusion In this paper, we systematically explore LLM-based ASR systems with a clean framework, where only a trainable linear projector is used to align the speech encoder and the LLM. Research indicates that LLMs that undergo supervised fine-tuning exhibit improved performance and robustness. Furthermore, speech encoders that are fine-tuned from self-supervised models demonstrate superior capabilities. The SLAM-ASR model is proposed and outperforms other LLM-based ASR models and previous NN-based ASR models on the Librispeech benchmark. Exploratory experiments show that there is a capability emergence in LLM-based ASR systems. We aspire for our research to serve as a step forward in the exploration of LLM-based ASR, offering assistance and insights to the broader community.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
29537bb0-029f-46b5-99a5-ddc6418a99a7
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## Acknowledgements We thank Changli Tang and Wenyi Yu for their helpful discussions and feedback.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2399e651-4c38-481a-9713-e677016575e5
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## A Appendix: More Exploration A.1 Text Perplexity

Table 8: PPL (WER(%)) ↓ of different LLMs:

| LLM | test-clean | test-other |
|---|---|---|
| LLaMA-2 | 53.74 (3.01) | 58.78 (7.15) |
| LLaMA-2-Chat | 77.60 (2.72) | 85.74 (6.79) |
| Vicuna | 76.44 (2.58) | 84.95 (6.47) |

Word-level text perplexity (PPL) of different LLMs is measured to investigate if the better performance of Vicuna is related to domain agreement, rather than supervised fine-tuning. As shown in Table 8, we measure perplexity on the test-clean and test-other subsets. Surprisingly, LLaMA-2 without SFT achieves the lowest perplexity by a large margin compared with the chat models, while performing the worst on word error rate. This proves that the better results of chat models are not due to domain agreement with the transcripts.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
760ad036-3a6d-456a-b1fb-b17726799f2e
# An Embarrassingly Simple Approach For Llm With Strong Asr Capacity ## A.2 Prompt Engineering

Table 9: Prompt examples:

| Type | Prompt |
|---|---|
| short prompts | Transcribe speech to text. |
| long prompts | Transcribe speech to text. Output the transcription directly without redundant content. Ensure that the output is not duplicated. |

We also investigate the performance of different prompts in LLM-based ASR, and the prompt examples are shown in Table 9. As shown in Table 10, when we use a short prompt, the model achieves better results compared with the model using a long prompt with a complex description. However, when we don't use any prompt (that is, an even shorter prompt with only the "ASSISTANT" tag left), the performance of the model drops. This indicates that although an LLM-based ASR model is a task-specific MLLM, the setting of the prompt is still important. A possible explanation is that the prompt lets the model optimize in the task-specific subspace through in-context learning, while overly complex prompts increase the learning difficulty and lead to a suboptimal solution. To investigate this assumption, we set a more complex prompt format. We use the same seed prompt for the ASR task as in SpeechGPT (Zhang et al., 2023) to generate 10 prompts to form a prompt library. At both the training and testing stages, a random prompt is drawn from the prompt library. As shown in the last row of Table 10, there is a big drop in model performance, which is in line with our assumption.

Table 10: WER(%) ↓ with different prompt settings:

| Prompt | test-clean | test-other |
|---|---|---|
| no prompts | 3.19 | 6.97 |
| short prompts | 2.58 | 6.47 |
| long prompts | 2.88 | 6.79 |
| randomly selected prompts | 5.90 | 10.02 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08846v1.md", "file_path": "paper_data/2402.08846v1.md", "file_size": 43881, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
33cb9f5c-548c-4952-aa2a-cf39a9e42c3a
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling Zekun Li*1, Zhiyu Zoey Chen2, Mike Ross3, Patrick Huber3, Seungwhan Moon3, Zhaojiang Lin3, Luna Dong3, Adithya Sagar3, Xifeng Yan1**, and Paul A. Crook**3 1University of California, Santa Barbara 2Carnegie Mellon University 3Meta AI
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c7b7f4aa-aabf-4c58-93ee-2d48040f2490
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## Abstract Large language models (LLMs) are increasingly prevalent in conversational systems due to their advanced understanding and generative capabilities in general contexts. However, their effectiveness in task-oriented dialogues (TOD), which requires not only response generation but also effective dialogue state tracking (DST) within specific tasks and domains, remains less satisfying. In this work, we propose a novel approach FNCTOD for solving DST with LLMs through function calling. This method improves zero-shot DST, allowing adaptation to diverse domains without extensive data collection or model tuning. Our experimental results demonstrate that our approach achieves exceptional performance with both modestly sized open-source and also proprietary LLMs: with in-context prompting it enables various 7B or 13B parameter models to surpass the previous state-of-the-art (SOTA) achieved by ChatGPT, and improves ChatGPT's performance beating the SOTA by 5.6% Avg. JGA. Individual model results for GPT-3.5 and GPT-4 are boosted by 4.8% and 14%, respectively. We also show that by fine-tuning on a small collection of diverse task-oriented dialogues, we can equip modestly sized models, specifically a 13B parameter LLaMA2-Chat model, with functioncalling capabilities and DST performance comparable to ChatGPT while maintaining their chat capabilities. We will open-source experimental code and model.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7cc53b18-9165-49f1-aec7-445cda5da239
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 1 Introduction Recent years have seen the rapid development of large language models (LLMs) that have demonstrated exceptional natural language understanding and generation capabilities. The integration of LLMs into industry applications, particularly as conversational assistants, is a notable trend. Fine-tuned with conversations between users and assistants, these models are further aligned with human preferences to enhance their ability to deliver fluent, helpful, and polite responses to user inquiries. Notable examples include proprietary systems such as ChatGPT1 and Claude2, as well as open-source models such as LLaMA2-Chat (Touvron et al., 2023), Vicuna (Chiang et al., 2023), and Baichuan (Baichuan, 2023). The primary focus of these chat-tuned LLMs has typically been on responding in general contexts. However, for another important type of conversation, task-oriented dialogues (TOD), the model is required to extract the intentions of users at each turn of the conversation, represented as slot-value pairs of per-domain predefined schemas; a process known as Dialogue State Tracking (DST). The challenge lies in the model's ability to accurately summarize user needs over multiple turns of conversation and also strictly adhere to a domain-specific ontology. The most direct solutions (Hosseini-Asl

| Zero-shot DST Paradigms | Base Model | Fine-tuning | Prompting | Plug-&-Play |
|---|---|---|---|---|
| Domain transfer approaches (Lin et al., 2021b,c; Zhao et al., 2022a) | Small LMs | ✓ | ✗ | ✗ |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
df95c6af-be37-4661-92a0-7a6465f3a69f
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 1 Introduction

(Table 1, continued)

| Zero-shot DST Paradigms | Base Model | Fine-tuning | Prompting | Plug-&-Play |
|---|---|---|---|---|
| Domain transfer approaches (Lin et al., 2021b,c; Zhao et al., 2022a) | Small LMs | ✓ | ✗ | ✗ |
| Previous prompting approaches (Heck et al., 2023; Chung et al., 2023) | Advanced proprietary LLMs | ✗ | ✓ | ✗ |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d70b6420-97e8-4b59-b664-6f93af412970
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 1 Introduction

(Table 1, continued)

| Zero-shot DST Paradigms | Base Model | Fine-tuning | Prompting | Plug-&-Play |
|---|---|---|---|---|
| Previous prompting approaches (Heck et al., 2023; Chung et al., 2023) | Advanced proprietary LLMs | ✗ | ✓ | ✗ |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
dc152cf1-530d-41c3-ba54-85de18478305
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 1 Introduction

(Table 1, continued: FNCTOD (Ours))
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
46ccb00a-3e80-4980-9cf6-5f783e645fa5
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 1 Introduction

(Table 1, continued)

| Zero-shot DST Paradigms | Base Model | Fine-tuning | Prompting | Plug-&-Play |
|---|---|---|---|---|
| FNCTOD (Ours) | Modestly-sized open-source LLMs & Advanced proprietary LLMs | ✓ | ✓ | ✓ |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
743d9ddf-e97d-4b48-b55e-b4188900d32b
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 1 Introduction

(Table 1, continued)

| Zero-shot DST Paradigms | Base Model | Fine-tuning | Prompting | Plug-&-Play |
|---|---|---|---|---|
| FNCTOD (Ours) | Modestly-sized open-source LLMs & Advanced proprietary LLMs | ✓ | ✓ | ✓ |

et al., 2020; Peng et al., 2020; Su et al., 2021) necessitate training on curated domain-specific annotated data, a process that is notoriously costly and labor-intensive. Despite efforts in automated dataset creation using GPT-3 (Li et al., 2022), these methods struggle to generalize to unseen domains. To achieve zero-shot DST
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b7339239-34ea-46b3-85fe-22233bb2a1b9
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 1 Introduction et al., 2020; Peng et al., 2020; Su et al., 2021) necessitate training on curated domain-specific annotated data, a process that is notoriously costly and labor-intensive. Despite efforts in automated dataset creation using GPT-3 (Li et al., 2022), these methods struggle to generalize to unseen domains. To achieve zero-shot DST for unseen domains, prior approaches usually involved domain transfer methods (Campagna et al., 2020; Lin et al., 2021a; Zhao et al., 2022b). However, these still require training on data from similar domains, and their performance has been far from satisfactory. LLMs exhibit remarkable capabilities for tackling various tasks without the need for task-specific fine-tuning, making them suited for zero-shot DST. However, while there have been initiatives to leverage ChatGPT for zero-shot DST (Hu et al., 2022; Hudeček and Dušek, 2023; Heck et al., 2023; Chung et al., 2023), these methods tend to treat DST as a standalone task rather than chat completion, which the models, especially chat-tuned models, are more proficient in. They usually take the whole conversation as input along with detailed instructions to generate in domain-specific formats. This setup poses challenges due to the long task context and specific output requirements. Consequently, this approach works exclusively with advanced ChatGPT or Codex models but fails with less powerful LLMs (Hudeček and Dušek, 2023). In this work, we introduce a novel approach, FNCTOD, to address zero-shot DST with LLMs. Our method seamlessly integrates DST as a part of the assistant's output during chat completion. Specifically, we treat the schema of each task-oriented dialogue domain as a specific function, and DST for this domain as the process of "calling" the corresponding function. We thus instruct LLMs to generate function calls along with the response in the assistant's output. To achieve this, we convert the domain schema into function specifications, which include the function's description and required arguments, and incorporate them into the *system prompt* of
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
430df101-6c76-4c0a-9003-4b13c182d514
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 1 Introduction , we introduce a novel approach FNCTOD, to address zero-shot DST with LLMs. Our method seamlessly integrates DST as a part of the assistant's output during chat completion. Specifically, we treat the schema of each taskoriented dialogue domain as a specific function, and DST for this domain as the process of "calling" the corresponding function. We thus instruct LLMs to generate function calls along with the response in the assistant's output. To achieve this, we convert the domain schema into function specifications, which include the function's description and required arguments, and incorporate them into the *system prompt* of the LLM. Additionally, we integrate these function calls into the assistant's output within the *dialogue context*. As shown in Figure 1, experimental results on the MultiWOZ benchmark (Budzianowski et al., 2018) represent a significant milestone. Our approach is the first that, without further fine-tuning, enables modestly sized open-source LLMs (7B or 13B parameters) to achieve comparable or superior performance compared to previous state-of-the-art (SOTA) prompting methods that relied exclusively on advanced proprietary LLMs such as ChatGPT and Codex (Hudeˇcek and Dušek, 2023; Heck et al., 2023; Chung et al., 2023). Furthermore, our approach beats the previous zero-shot SOTA by 5.6% Av. JGA, firmly establishing a new standard. It improves ChatGPT performance; beating previous individual best results for GPT-3.5 and GPT-4 by 4.8% and 14%, respectively. Additionally, we show that by fine-tuning a 13B LLAMA2-CHAT model using a collection of 7,200 task-oriented dialogues - consisting of 200 randomly selected dialogues covering 36 diverse domains, from heterogeneous TOD datasets - we can equip it with function-calling DST abilities comparable to ChatGPT while still maintaining its response generation capabilities. The comparison with prior studies is summarized in Table 1 and Figure 1. Our contribution is threefold: (1) Demonstration that the FnCTOD approach achieves outstanding performance with both open-source and proprietary LLMs through in-context prompting: enables open-source 7–13B models to surpass the previous SOTA achieved by ChatGPT, and enhances GPT-4's performance
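To illustrate the idea of treating a domain schema as a function, the sketch below shows a hypothetical function specification for a hotel domain and how a dialogue state could appear as a function call in the assistant's output. The slot names are taken from the common MultiWOZ hotel schema, but the spec format, the function name find_book_hotel, and the <fn_call> wrapper are illustrative assumptions, not the exact FnCTOD prompt format.

```python
# Hypothetical rendering of a TOD domain schema as a function specification.
hotel_function_spec = {
    "name": "find_book_hotel",
    "description": "Find and book a hotel that satisfies the user's constraints.",
    "parameters": {
        "type": "object",
        "properties": {
            "area":       {"type": "string", "description": "Area of town, e.g. centre, north."},
            "pricerange": {"type": "string", "description": "Price range, e.g. cheap, moderate, expensive."},
            "stars":      {"type": "string", "description": "Star rating of the hotel."},
            "bookday":    {"type": "string", "description": "Day of the week the stay starts."},
        },
    },
}

# The tracked dialogue state for the current turn is then expressed as a "call" to this
# function, generated by the LLM before its natural-language response (format assumed):
assistant_output = (
    '<fn_call> {"name": "find_book_hotel", '
    '"arguments": {"area": "centre", "pricerange": "cheap"}} </fn_call> '
    "I found several cheap hotels in the centre. Would you like me to book one?"
)
```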
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2b278aae-27f9-48f9-a1ea-add661a8d6ba
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 1 Introduction dialogues - consisting of 200 randomly selected dialogues covering 36 diverse domains, from heterogeneous TOD datasets - we can equip it with function-calling DST abilities comparable to ChatGPT while still maintaining its response generation capabilities. The comparison with prior studies is summarized in Table 1 and Figure 1. Our contribution is threefold: (1) Demonstration that the FnCTOD approach achieves outstanding performance with both open-source and proprietary LLMs through in-context prompting: enables open-source 7–13B models to surpass the previous SOTA achieved by ChatGPT, and enhances GPT-4's performance by 14%, establishing a new SOTA. **(2)** Bridging the *zero-shot* DST performance gap between opensource models and ChatGPT by fine-tuning on a small collection of diverse dialogues. **(3)** Showing that function calling DST capabilities can be integrated into existing chat-tuned LLMs while preserving response capabilities.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
98cffb5f-7f5d-47ab-9072-e41f5191f4ec
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 2 Related Work 2.1 Dialogue State Tracking DST is an essential, yet challenging task in the construction of TOD systems. Its primary purpose is to extract and track user goals at each turn throughout the conversation. The tracked dialogue state is usually represented as the slot values of the predefined schema for specific domains. This requires the slot values to adhere closely to the domain-specific schema. Consequently, previous methods have relied on the collection and annotation of domain-specific dialogues for model training (Lee et al., 2019; Wu et al., 2019; Heck et al., 2020; Hosseini-Asl et al., 2020; Peng et al., 2020; Lin et al., 2020). However, obtaining training data is notoriously expensive, even with methods that utilize GPT-3 to automatically simulate such data (Li et al., 2022). Furthermore, these approaches are limited to handling only the domains covered in the training data. To address zero-shot DST in unseen domains, previous cross-domain transfer strategies based on small models typically leverage extra dialogue corpora in similar domains (Wu et al., 2020; Lin et al., 2021b; Su et al., 2021) or redefine DST in terms of other types of tasks, such as question answering (Lin et al., 2021c) or summarization (Shin et al., 2022), to find appropriate additional training data. Despite these efforts, their overall zero-shot performance remains relatively low.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2a3bfb41-75bf-4394-919f-2e1fbb7ad407
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 2.2 Leveraging Llms For Dialogue Tasks LLMs (Brown et al., 2020; Chowdhery et al., 2023; OpenAI, 2023) have demonstrated remarkable capabilities in handling various tasks without requiring further tuning. Recent chat/instruction-tuned models further exhibit impressive performance in conversational contexts (Touvron et al., 2023; Chiang et al., 2023; Yang et al., 2023). However, current chat models primarily focus on general conversation, often omitting task-oriented dialogues (TOD). TOD differs from general conversation in that it requires models not only to generate responses but also to track dialogue states according to domain-specific schemas. While ChatGPT has shown effectiveness in response generation within TOD (Li et al., 2023c), its zero-shot DST performance, as explored in recent research on prompting approaches (Hu et al., 2022; Bang et al., 2023; Hudeček and Dušek, 2023; Heck et al., 2023; Zhang et al., 2023; Chung et al., 2023), is still not satisfactory and remains a significant challenge.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0c440168-6217-4df1-8259-569b93a756f6
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 2.3 Tool Usage Within Llms Early work on tool usage (Parisi et al., 2022; Schick et al., 2023) and the recent launch of GPT-4 plug-in and function-calling features (OpenAI, 2023) have highlighted the importance of function calling for LLMs, encouraging follow-up work (Patil et al., 2023; Shen et al., 2023; Li et al., 2023a). Commonly integrated tools include web browsers, calculators (Cobbe et al., 2021), translation systems, and so on. We are the first to utilize this tool-usage/function-calling capability to solve the challenging DST task in TOD with LLMs, bridging the gap between general conversation and task-oriented dialogues.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8dbfc34c-ea44-4922-974c-292c67842d00
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 3 Background 3.1 Chat-Tuned Llms Chat-tuned LLMs are models specifically finetuned to interact with users in a conversational manner. This category encompasses proprietary models such as ChatGPT and Claude, as well as open-source models such as Vicuna (Chiang et al., 2023), LLaMA2-Chat (Touvron et al., 2023), and Baichuan (Yang et al., 2023). These chat models typically start as base models that are further finetuned with a dialogue format, enabling them to function effectively as conversational agents. As depicted in Figure 2, the dialogue format typically features three distinct roles within two components: (1) the *system* role in the *system prompt* section, which defines the assistant's roles, responsibilities, and expected behaviors; and (2) the *user* and assistant roles in the *dialogue context* section, encompassing their conversation. The model is typically tasked to produce the *assistant*'s responses to the user's input. These chat models are primarily designed to generate helpful, detailed, and friendly responses to general user inquiries, rather than handling task-specific conversations as in TOD.
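To make the role structure described above concrete, below is a minimal sketch of a dialogue represented as an OpenAI-style list of messages; the exact chat template applied on top of these roles varies by model, and the list-of-messages layout here is an assumption for illustration (the utterances are drawn from the example conversation shown later in Figure 4).

```python
# Minimal sketch of the three-role chat format: a system prompt plus alternating
# user and assistant turns. Open-source chat models (Vicuna, LLaMA2-Chat, ...)
# serialize an equivalent role structure with their own templates.
system_prompt = (
    "You are a task-oriented assistant. "
    "You can use the given functions to fetch further data to help the users."
)

dialogue = [
    {"role": "system", "content": system_prompt},  # defines the assistant's behavior
    {"role": "user", "content": "i would like information about museums in cambridge ."},
    {"role": "assistant", "content": "there are 23 museums in cambridge . "
                                     "what part of town will you be in ?"},
    {"role": "user", "content": "can you tell me about museums in the centre of town ?"},
]
# Given this context, the model is tasked with producing the next assistant turn.
```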
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0613f7f9-0bca-468a-8dc9-3b04c8d970dc
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 3.2 Dst Task Formulation In TOD, at each turn of conversation, the task of DST is to summarize the dialogue state $S_t$ given the dialogue context $C_t = \{A_1, U_1, \cdots, A_t, U_t\}$, where $U_t$ and $A_t$ represent the user utterance and assistant response at the $t$-th turn. For simplicity, we will omit the turn index $t$ in subsequent discussions. The dialogue state $S$ is a set of slot-value pairs: $$S=\{(s_{1,D_{1}},v_{1,D_{1}}),\cdots,(s_{i,D_{j}},v_{i,D_{j}})\},\tag{1}$$
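As a concrete illustration of the state defined in Eq. (1), the sketch below represents $S$ as a mapping from domains to slot-value pairs; the slot names and values are hypothetical MultiWOZ-style placeholders, not taken from a specific dialogue in the paper.

```python
# Minimal sketch of a dialogue state S (Eq. 1): slot-value pairs grouped by domain.
# Slot and value strings here are illustrative placeholders.
dialogue_state = {
    "restaurant": {              # a domain D_j
        "food": "italian",       # slot s_{1,D_j} with tracked value v_{1,D_j}
        "area": "centre",
        "pricerange": "moderate",
    },
    "taxi": {
        "destination": "museum of archaeology",
    },
}

# S_{D_j}: the tracked slots for one particular domain.
restaurant_state = dialogue_state["restaurant"]
```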
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
561f2181-9451-4eba-a3dc-f79a1683de65
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## System Prompt You are a task-oriented assistant. You can use the given functions to fetch further data to help the users.
<Functions>
{ "name": "find_book_hotel", "description": "hotel reservations and vacation stays.", "arguments": [ ] }
</Functions>
<Examples> … </Examples>

where $s_{i,D_j}$ is the $i$-th slot in the domain $D_j$, and $v_{i,D_j}$ is its tracked value. Each domain $D_j$ corresponds to a set of slots for a specific service, API call, or database query, such as restaurant reservations. In the case of the *restaurant* domain, the slots might include "restaurant-food", "restaurant-area", "restaurant-pricerange", etc. We use $S_{D_j}$ to denote the tracked slots for domain $D_j$.

## 4 Approach

Our method redefines DST as function calling, treating each domain as a distinct function, and the slot values within the domain as its arguments. As shown in Figure 2, this paradigm is represented in chat-tuned models by embedding function specifications within system prompts, as shown in Figure 3. The model is tasked with generating function calls followed by a response, as shown in Figure 4. We provide more details of our approach below.

**DST as Function Calling** In our formalization, DST is conceptualized as function calling. Each domain $D_j$ is modeled as a unique function $F_j$, with the associated slot values serving as arguments. Consequently, at each turn of the conversation, the DST task transforms into identifying the correct function $F_j$ and its arguments $S_{D_j}$:

$$\texttt{<fn\_call>}\; F_j(s_{1,D_j}=v_{1,D_j},\, s_{2,D_j}=v_{2,D_j},\, \cdots,\, s_{i,D_j}=v_{i,D_j})\; \texttt{</fn\_call>},\tag{2}$$

where "<fn_call>" and "</fn_call>" are special tokens. In practice, we use "<function_call>" and "</function_call>" to represent them and generate the function call in JSON format. Some examples of function calls generated within a conversation are shown in Figure 4.
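The sketch below illustrates this serialization: one tracked domain rendered as the special-token-delimited JSON function call of Eq. (2). The helper name and the argument values are illustrative assumptions, not the paper's released code.

```python
import json

def render_function_call(function_name: str, arguments: dict) -> str:
    """Serialize one tracked domain (Eq. 2) as the JSON function call emitted
    inside the assistant turn, delimited by the special tokens."""
    payload = {"function": function_name, "arguments": arguments}
    return f"<function_call> {json.dumps(payload)} </function_call>"

# Hypothetical example mirroring the hotel domain in Figure 4:
print(render_function_call("find_book_hotel", {"name": "archway house", "people": "4"}))
# <function_call> {"function": "find_book_hotel", "arguments": {"name": "archway house", "people": "4"}} </function_call>
```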
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
cfea3020-2449-4ea4-b63d-076b2b315598
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## System Prompt **Dialogue Prompt Format** As shown in Figure 4, we incorporate this function calling as an integral part of the conversation. At each turn of the conversation, given the current conversation context, the chat model is tasked with first generating the function call, followed by the response. To achieve that, we convert the domain schema into function specifications, using JSON, and include them in the *system prompt* within the dialogue prompt, as shown in Figure 3.
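A minimal sketch of this conversion is shown below. The specification fields mirror the Figure 3 excerpt, but the exact JSON layout, the argument entries, and the prompt wording are assumptions rather than the paper's released format.

```python
import json

# Hypothetical function specification converted from a domain schema
# (field names mirror Figure 3; the argument entries are illustrative).
hotel_spec = {
    "name": "find_book_hotel",
    "description": "hotel reservations and vacation stays.",
    "arguments": [
        {"name": "area", "description": "area of town the hotel is in"},
        {"name": "people", "description": "number of people for the booking"},
    ],
}

def build_system_prompt(function_specs):
    """Embed JSON function specifications into the system prompt, wrapped in the
    <Functions> block shown in Figure 3."""
    specs = "\n".join(json.dumps(spec, indent=2) for spec in function_specs)
    return (
        "You are a task-oriented assistant. "
        "You can use the given functions to fetch further data to help the users.\n"
        "<Functions>\n" + specs + "\n</Functions>"
    )

print(build_system_prompt([hotel_spec]))
```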
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fdf35968-975e-4aed-950e-e1bba0c06b20
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## Dialogue Context

User: i would like information about museums in cambridge .
Assistant: <function_call> {"function": "find_attraction", "arguments": {"type": "museum"}} </function_call> there are 23 museums in cambridge . what part of town will you be in ?
User: can you tell me about museums in the centre of town ?
Assistant: <function_call> {"function": "find_attraction", "arguments": {"type": "museum", "area": "centre"}} </function_call> i have the castle galleries museum in the centre of town . phone number is 01223307402 .
User: i also want to find the hotel called archway house .
Assistant: <function_call> {"function": "find_book_hotel", "arguments": {"name": "archway house"}} </function_call> it is located in the north , on 52 gilbert road , cb43pe . would you like to book a room ?
User: yes , please book for 4 nights on friday for 4 people .
Assistant: <function_call> {"function": "find_book_hotel", "arguments": {"name": "archway house", "area": "north", "people": "4", "day": "friday", "stay": "4"}} </function_call> all set , the reference number is 6s9gcfdo , can i help with anything else ?
User: that is all . thank you so much for your help .
Assistant: have a great day ! goodbye !

By doing so, the model can be aware of the supported functions and the correct generation format to call the function.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4a7a6a6e-d4fa-4ea1-944c-d7fbddb8d053
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## Dialogue Context **Function Call Decomposition** As outlined, the model is required not only to predict which function to call (*i.e.*, the function name) but also to generate arguments for the predicted function. To streamline this process and enhance control, we split it into two consecutive steps: **Function Selection** and **Argument Generation**. As shown in Figure 2, for each turn of the conversation, the model first selects a function $F_j$ from the supported functions. In this step, we only include the function descriptions in the *system prompt* and prompt the model to generate only the selected domain/function, surrounded by the special tokens "<domain>" and "</domain>". Subsequently, we include the full specification of the chosen function $F_j$ in the system prompt, prompting the model to generate the corresponding arguments for the function $F_j$. **In-context Prompting** Since the current open-source models are not specifically fine-tuned to generate function calls, there is no guarantee that the model will always generate the correct format. To address that, we also include in-context example conversations, as shown in Figure 4, along with the specification of the predicted function in the system prompt. We manually selected a few demonstration examples for each supported domain. **Model Fine-tuning** To equip open-source models with function-calling capabilities without needing demonstration examples, we fine-tune a LLAMA2-13B-CHAT model using a collection of heterogeneous task-oriented dialogue datasets, including WOZ (Mrkšić et al., 2016), CamRest676 (Wen et al., 2016b,a), MSR-E2E (Li et al., 2018), TaskMaster (Byrne et al., 2019), and Schema-Guided Dialogues (SGD) (Rastogi et al., 2020). Note that we deliberately exclude the entire target test dataset. From these datasets, we choose 36 distinct domains/functions with high-quality annotations. Instead of using all the data in those datasets, we randomly sampled 200 dialogues from each domain across the datasets, totaling 7,200 dialogues for training. This sample size has already proved sufficient for effective results. During training, we incorporate the specifications of all functions invoked in each conversation's system prompt. Our loss calculation focused solely on the function-calling aspect of the assistant's generation.
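To make the two-step Function Selection and Argument Generation procedure described above concrete, here is a minimal sketch of one tracking turn. The `chat_complete` callable, the prompt wording, and the parsing logic are assumptions standing in for whichever chat-completion backend is used; they are not the paper's released code.

```python
import json
import re

def track_turn(chat_complete, dialogue_context, function_specs):
    """One DST turn as two steps: (1) Function Selection, (2) Argument Generation.
    `chat_complete(system_prompt, dialogue_context) -> str` is a stand-in for any
    chat-completion backend."""
    # Step 1: Function Selection -- the system prompt lists only names/descriptions.
    brief = json.dumps(
        [{"name": f["name"], "description": f["description"]} for f in function_specs]
    )
    out = chat_complete(
        "Supported functions:\n" + brief +
        "\nSelect the function for the last user turn, wrapped in <domain> ... </domain>.",
        dialogue_context,
    )
    match = re.search(r"<domain>\s*(.*?)\s*</domain>", out)
    if match is None:
        return None
    domain = match.group(1)

    # Step 2: Argument Generation -- only the full spec of the selected function is shown.
    chosen = next(f for f in function_specs if f["name"] == domain)
    out = chat_complete(
        "<Functions>\n" + json.dumps(chosen, indent=2) + "\n</Functions>\n"
        "Generate the function call, then the response.",
        dialogue_context,
    )
    match = re.search(r"<function_call>\s*(\{.*?\})\s*</function_call>", out, re.S)
    return json.loads(match.group(1)) if match else None
```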
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f484a6ad-fe17-4f22-b8ba-c92f281477db
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## Dialogue Context We refrained from fine-tuning the response generation component, in consideration of the LLMs' existing competence in producing coherent responses and the scarcity of function-calling examples in our dataset. The fine-tuned model is dubbed FNCTOD-LLAMA2-13B.
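Below is a minimal, framework-agnostic sketch of restricting supervision to the function-call span of the assistant turn, as described above. It assumes the two delimiters are single special tokens; the token ids and the helper itself are illustrative, since the actual training code is not part of this description.

```python
IGNORE_INDEX = -100  # conventional "ignore" label for cross-entropy losses

def mask_non_function_call_labels(input_ids, fn_start_id, fn_end_id):
    """Keep loss only on tokens inside <function_call> ... </function_call>.
    Assumes each delimiter is a single special token id (an assumption)."""
    labels = [IGNORE_INDEX] * len(input_ids)
    inside = False
    for i, tok in enumerate(input_ids):
        if tok == fn_start_id:
            inside = True
        if inside:
            labels[i] = tok        # supervise this position
        if tok == fn_end_id:
            inside = False
    return labels

# Hypothetical token ids: 32001/32002 stand in for the two delimiters.
print(mask_non_function_call_labels([1, 5, 32001, 9, 9, 32002, 7], 32001, 32002))
# -> [-100, -100, 32001, 9, 9, 32002, -100]
```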
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9a25a1d3-7b09-4669-b8a5-a982cff4ea37
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5 Experiments 5.1 Experimental Setup **Dataset and Metrics** We evaluate on the widely used task-oriented multi-domain dataset MultiWOZ 2.1 (Budzianowski et al., 2018; Eric et al., 2020). We used the 1,000 dialogues in the test split and measured joint goal accuracy (JGA), the percentage of turns for which all slot values are correctly predicted (a minimal sketch of this metric follows the table below). The test set spans 5 domains, and each conversation can cover multiple domains. **Baselines** We compare our approach with two distinct lines of work: (1) Cross-domain transfer approaches, which involve training on MultiWOZ with one domain excluded and then evaluating on the held-out domain. This category includes TRADE (Wu et al., 2019), MA-DST (Kumar et al., 2020), TransferQA (Lin et al., 2021b), T5DST (Lin et al., 2021c), and D3ST (Zhao et al., 2022a). (2) Previous prompting approaches that have only shown efficacy with advanced ChatGPT and Codex. These include IC-DST (Hu et al., 2022) using Codex, and Heck et al. (2023) and InstructTODS (Chung et al., 2023) using ChatGPT (GPT-3.5/4).

Table 2: Cross-domain Transfer approaches:

| Model | Attraction JGA | Attraction Slot-F1 | Hotel JGA | Hotel Slot-F1 | Restaurant JGA | Restaurant Slot-F1 | Taxi JGA | Taxi Slot-F1 | Train JGA | Train Slot-F1 | Average JGA | Overall JGA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| TRADE (Wu et al., 2019) | 20.06 | – | 14.20 | – | 12.59 | – | 59.21 | – | 22.39 | – | 25.69 | – |
| MA-DST (Kumar et al., 2020) | 22.46 | – | 16.28 | – | 13.56 | – | 59.27 | – | 22.76 | – | 26.87 | – |
| TransferQA (Lin et al., 2021b) | 31.25 | – | 22.72 | – | 26.28 | – | 61.87 | – | 36.72 | – | 35.77 | – |
| T5DST (Lin et al., 2021c) | 33.09 | – | 21.21 | – | 21.65 | – | 64.62 | – | 35.43 | – | 35.20 | – |
| D3ST (Zhao et al., 2022a) | 56.40 | – | 21.80 | – | 38.20 | – | 78.40 | – | 38.70 | – | 46.70 | – |
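As referenced above, a minimal sketch of the JGA computation, assuming per-turn states are represented as flat slot-to-value dictionaries (an assumption about the data layout, not the paper's evaluation script):

```python
def joint_goal_accuracy(predicted_states, gold_states):
    """JGA: fraction of turns whose predicted state exactly matches the gold state,
    i.e., every slot value is correct. Each state is a dict of slot -> value."""
    assert len(predicted_states) == len(gold_states)
    correct = sum(pred == gold for pred, gold in zip(predicted_states, gold_states))
    return correct / len(gold_states)

# Hypothetical two-turn example: the second turn misses one slot value.
print(joint_goal_accuracy(
    [{"hotel-area": "north"}, {"hotel-area": "north", "hotel-stay": "4"}],
    [{"hotel-area": "north"}, {"hotel-area": "north", "hotel-stay": "3"}],
))  # -> 0.5
```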
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fc719b3f-7de8-4209-9ee3-c73904175ae1
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5 Experiments 5.1 Experimental Setup Table 2 (continued): Previous Prompting approaches, followed by Our approach (FnCTOD):

| Model | Attraction JGA | Attraction Slot-F1 | Hotel JGA | Hotel Slot-F1 | Restaurant JGA | Restaurant Slot-F1 | Taxi JGA | Taxi Slot-F1 | Train JGA | Train Slot-F1 | Average JGA | Overall JGA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| *IC-DST (Codex) | 60.00 | – | 46.70 | – | 57.30 | – | 71.40 | – | 49.40 | – | 56.96 | – |
| Heck et al. (2023) (GPT-3.5) | 52.70 | – | 42.00 | – | 55.80 | – | 70.90 | – | 60.80 | – | 56.44 | 31.50 |
| InstructTODS (GPT-3.5) | 30.23 | 65.38 | 26.77 | 76.28 | 48.28 | 82.90 | 56.22 | 75.33 | 53.75 | 83.64 | 42.02 | – |
| InstructTODS (GPT-4) | 39.53 | 78.99 | 31.23 | 84.07 | 55.86 | 88.23 | 63.24 | 82.71 | 59.83 | 89.72 | 48.16 | – |
| FnCTOD ChatGPT (GPT-3.5) | 67.15 | 87.20 | 37.56 | 82.86 | 60.12 | 90.21 | 74.43 | 86.90 | 67.29 | 92.48 | 61.31 | 38.56 |
| FnCTOD ChatGPT (GPT-4) | 58.77 | 81.84 | 45.15 | 85.07 | 63.18 | 91.06 | 76.39 | 87.73 | 69.48 | 90.16 | 62.59 | 38.71 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f0c4192b-af48-429b-8a49-7fd25a4b7e4b
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5 Experiments 5.1 Experimental Setup Table 2 (continued): Our approach (FnCTOD) with the fine-tuned model and open-source chat models:

| Model | Attraction JGA | Attraction Slot-F1 | Hotel JGA | Hotel Slot-F1 | Restaurant JGA | Restaurant Slot-F1 | Taxi JGA | Taxi Slot-F1 | Train JGA | Train Slot-F1 | Average JGA | Overall JGA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FNCTOD-LLAMA2-13B | 62.24 | 84.99 | 46.83 | 85.39 | 60.27 | 88.69 | 67.48 | 80.39 | 60.90 | 89.88 | 59.54 | 37.67 |
| ZEPHYR-7B-BETA | 56.50 | 81.97 | 38.43 | 79.52 | 63.18 | 91.19 | 74.10 | 86.56 | 56.20 | 90.00 | 57.68 | 32.11 |
| VICUNA-7B-V1.5 | 50.66 | 74.93 | 35.00 | 73.66 | 52.76 | 85.25 | 67.02 | 80.36 | 59.66 | 89.05 | 53.02 | 29.45 |
| VICUNA-13B-V1.5 | 54.25 | 80.99 | 38.43 | 79.96 | 56.44 | 87.26 | 69.11 | 83.37 | 58.82 | 89.26 | 55.41 | 31.84 |
| BAICHUAN2-13B-CHAT | 53.67 | 79.57 | 40.15 | 81.36 | 59.02 | 87.82 | 69.31 | 81.95 | 60.67 | 89.45 | 56.56 | 33.21 |
| LLAMA2-7B-CHAT | 42.64 | 70.18 | 30.47 | 69.37 | 37.60 | 78.63 | 63.20 | 73.80 | 44.17 | 82.18 | 43.44 | 16.78 |
| LLAMA2-13B-CHAT | 49.76 | 76.80 | 29.50 | 67.60 | 48.87 | 81.33 | 64.66 | 68.97 | 53.59 | 85.09 | 49.28 | 25.68 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
59301af4-4789-4240-9849-25bf185d94d8
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5 Experiments 5.1 Experimental Setup Table 2 (continued):

| Model | Attraction JGA | Attraction Slot-F1 | Hotel JGA | Hotel Slot-F1 | Restaurant JGA | Restaurant Slot-F1 | Taxi JGA | Taxi Slot-F1 | Train JGA | Train Slot-F1 | Average JGA | Overall JGA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LLAMA2-70B-CHAT | 50.66 | 78.26 | 34.03 | 76.61 | 54.48 | 86.18 | 66.10 | 72.60 | 56.53 | 87.39 | 52.36 | 28.38 |

Table 2 reports results for each individual domain, JGA averaged across domains (**Average JGA**), and also multi-domain JGA (**Overall JGA**). The baseline results are directly taken from their respective works. The best performances in each column are in **bold**.

**Evaluated Models** We evaluate our method on proprietary ChatGPT and various open-source models. For ChatGPT, we evaluated the versions of GPT-3.5-Turbo (gpt-3.5-turbo-1106) and GPT-4 (gpt-4-1106-preview), both of which are already equipped with function-calling capabilities.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3e69bafe-d0c3-4ae6-afd3-9f9d3d97b664
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5 Experiments 5.1 Experimental Setup Regarding open-source models, we assessed several widely recognized chat-tuned models of varying sizes, including the 7B-parameter model ZEPHYR-7B-BETA (Tunstall et al., 2023), the 7B and 13B versions of VICUNA-V1.5 (Chiang et al., 2023), the 7B, 13B, and 70B versions of LLAMA2-CHAT (Touvron et al., 2023), as well as the 13B-parameter model BAICHUAN2-13B-CHAT (Baichuan, 2023). Detailed descriptions and Huggingface model paths of these models can be found in the Appendix. Additionally, we evaluate our fine-tuned model FNCTOD-LLAMA2-13B. It is worth noting that, unlike the domain-transfer baselines, our model is trained exclusively on 7,200 dialogues from datasets other than MultiWOZ, making the setup more realistic and challenging. **Inference Details** For both ChatGPT and our fine-tuned FNCTOD-LLAMA2-13B, which are equipped with function-calling capabilities, we perform zero-shot prompting, excluding in-context examples from the system prompt. For the other open-source models, we perform few-shot prompting using five examples (5-shot) by default. Note that the "shot" in zero/few-shot prompting refers to the number of in-context examples used when prompting the models, whereas the "shot" in zero-shot DST refers to the number of in-domain examples seen in the training data.
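A minimal sketch of how the 5-shot prompt can be assembled for models without native function calling is shown below; the exact formatting of the <Examples> block is an assumption, since Figure 3 only indicates its placement in the system prompt.

```python
def build_prompt(system_prompt, example_conversations=(), n_shots=5):
    """Optionally prepend up to `n_shots` in-context example conversations to the
    system prompt inside an <Examples> block (Figure 3). With no examples, the
    block is omitted, matching the zero-shot prompting setting."""
    shots = list(example_conversations)[:n_shots]
    if not shots:
        return system_prompt
    examples = "\n\n".join(shots)
    return f"{system_prompt}\n<Examples>\n{examples}\n</Examples>"

# Hypothetical usage: five demonstration conversations for the hotel domain.
demo = "User: ...\nAssistant: <function_call> ... </function_call> ..."
five_shot_prompt = build_prompt("You are a task-oriented assistant.", [demo] * 5)
```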
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fe8a193b-4422-4257-a661-cb803799870f
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5.2 Zero-Shot Dst Evaluation Table 2 presents the zero-shot DST performance comparison, with observations summarized below. **Our approach empowers moderately sized open-source models to surpass previous SOTA results achieved with advanced ChatGPT.** Previous prompting approaches showed promising results exclusively with advanced proprietary models but underperformed with less advanced models (Hudeček and Dušek, 2023). Our approach is the first to enable moderately sized open-source models to achieve comparable or superior performance relative to previous SOTA results obtained with advanced ChatGPT and Codex. Specifically, the 7B-parameter ZEPHYR-7B-BETA and 13B-parameter BAICHUAN2-13B-CHAT models outperform the previous SOTA. This significant advancement marks a milestone in the practical application of LLMs for DST and TOD. **Our approach significantly improves ChatGPT's performance over previous prompting approaches.** The efficacy of our approach is demonstrated by improvements of 4.8% (Average JGA) for GPT-3.5 and 14% for GPT-4, compared to previously reported results with each of these models. Our result with GPT-4 beats the previous SOTA prompting approach using Codex by 5.6% Average JGA. **Our fine-tuned 13B-parameter model matches the performance of ChatGPT.** It is evident that our fine-tuned FNCTOD-LLAMA2-13B significantly improves over its base model LLAMA2-13B-CHAT and achieves performance comparable to ChatGPT. This demonstrates that we can easily equip moderately sized open-source LLMs with function-calling capabilities and zero-shot DST performance comparable to ChatGPT, marking an exciting advance in bridging the gap between open-source and proprietary models.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ce1952f6-1dbf-45b5-9c23-a6fd2e541ab9
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5.3 Zero-Shot End-To-End Tod Evaluation In practical settings, a TOD system queries a knowledge base or API using the tracked dialogue states to ground responses. We perform an end-to-end evaluation of both DST and response generation, which is a more realistic and challenging setting. Our FNCTOD approach enables the generation of both dialogue states, *i.e.*, function calls, *and* responses in the assistant's output. This contrasts with the prompting methods that typically treat DST as a standalone task. Consistent with the previous work on end-to-end zero-shot TOD evaluation (Hudeček and Dušek, 2023), we evaluated using the MultiWOZ 2.2 dataset (Zang et al., 2020) with delexicalized responses. Our evaluation metrics include JGA for DST and *Success* rate for the generated response. Success measures the percentage of dialogues in which the user's goals were fully met. The results are presented in Table 3.

| Model | JGA | Success |
|---|---|---|
| ChatGPT (Hudeček and Dušek, 2023) | | |
| FNCTOD-LLAMA2-13B | 37.9 | 44.4 |
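A minimal sketch of the Success metric as described above, assuming a per-dialogue boolean indicating whether all of the user's goals were met (the full MultiWOZ evaluation script derives this from goal annotations and the delexicalized responses):

```python
def success_rate(goals_met_per_dialogue):
    """Success: percentage of dialogues in which all of the user's goals were fully met.
    `goals_met_per_dialogue` is a list of booleans, one per evaluated dialogue."""
    return 100.0 * sum(goals_met_per_dialogue) / len(goals_met_per_dialogue)

# Hypothetical example: goals fully met in 3 of 4 dialogues.
print(success_rate([True, True, False, True]))  # -> 75.0
```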
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1bb43eed-3570-4756-8e00-4cbb75bb1bb1
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5.3 Zero-Shot End-To-End Tod Evaluation Table 3 (continued):

| Model | JGA | Success |
|---|---|---|
| ZEPHYR-7B-BETA | 32.3 | 57.5 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
286c983a-9d79-485f-8641-0f3db71aca4d
# Large Language Models As Zero-Shot Dialogue State Tracker Through Function Calling ## 5.3 Zero-Shot End-To-End Tod Evaluation Table 3 (continued):

| Model | JGA | Success |
|---|---|---|
| VICUNA-7B-V1.5 | 29.4 | 37.7 |
| VICUNA-13B-V1.5 | 33.8 | 23.1 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10466v1.md", "file_path": "paper_data/2402.10466v1.md", "file_size": 62410, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }