4aeb678c-b0f0-446a-a360-5107f5dc565a
## 4.4. Failure Of Scaling Law

| Model | GuessNum ↓ | DFS ↓ | BFS ↓ | Coin ↓ | CaveDFS ↓ | CaveBFS ↓ |
|---|---|---|---|---|---|---|
| Small < 10B | | | | | | |
| Llama2-7B-chat | 0.08 (-0.18) | 0.39 (-0.19) | 0.65 (+0.05) | 0.11 (+0.04) | 0.38 (-0.12) | 0.58 (-0.18) |
| Vicuna-7B-v1.5-16K | 0.02 (-0.44) | 0.37 (-0.28) | 0.68 (-0.16) | 0.02 (-0.98) | 0.39 (-0.15) | 0.68 (-0.04) |
| Mistral-7B-Instruct-v02 | 0.01 (-0.05) | 0.14 (-0.35) | 0.39 (+0.15) | 0.01 (-0.06) | 0.18 (-0.31) | 0.48 (+0.21) |
| DeepSeek-LLM-7B | 0.04 (-0.39) | 0.16 (-0.18) | 0.61 (+0.09) | 0.04 (-0.35) | 0.19 (-0.39) | 0.62 (-0.15) |
| DeepSeek-MoE-16B | 0.02 (-0.98) | 0.14 (-0.49) | 0.86 (-0.02) | 0.02 (-0.98) | 0.13 (-0.58) | 0.87 (-0.02) |
| 10B ≤ Medium < 50B | | | | | | |
| Llama2-13B-chat | 0.06 (+0.05) | 0.50 (+0.16) | 0.57 (-0.08) | 0.05 (-0.14) | 0.48 (+0.10) | 0.56 (+0.01) |
| Vicuna-13B-v1.5-16K | 0.12 (-0.27) | 0.16 (-0.50) | 0.23 (-0.58) | 0.13 (-0.87) | 0.15 (-0.41) | 0.27 (-0.37) |
| Mixtral-8x7B-Instruct-v01 | 0.00 (-0.00) | 0.20 (-0.27) | 0.48 (+0.34) | 0.00 (-0.00) | 0.17 (-0.15) | 0.39 (+0.24) |
| Large ≥ 50B | | | | | | |
| Llama2-70B-chat | 0.07 (-0.04) | 0.14 (-0.19) | 0.46 (+0.18) | 0.09 (+0.09) | 0.20 (-0.15) | 0.60 (+0.30) |
| DeepSeek-LLM-67B | 0.00 (-0.12) | 0.18 (-0.22) | 0.36 (-0.09) | 0.00 (-0.36) | 0.18 (-0.10) | 0.39 (+0.01) |
| Closed-source | | | | | | |
| GPT-3.5-Turbo | 0.00 (-0.00) | 0.36 (+0.01) | 0.12 (+0.01) | 0.02 (+0.02) | 0.19 (-0.01) | 0.27 (-0.00) |
| GPT-4-Turbo | 0.00 (-0.00) | 0.02 (-0.01) | 0.00 (-0.00) | 0.00 (-0.00) | 0.23 (-0.00) | 0.11 (-0.01) |
| Gemini-Pro | 0.00 (+0.00) | 0.02 (-0.23) | 0.03 (-0.03) | 0.00 (+0.00) | 0.04 (-0.18) | 0.05 (-0.05) |

We observe that, contrary to the expected improvement with increased model size - a trend typically observed in existing LLM benchmarks - performance on tasks like GuessNum, DFS, and their embodied variants does not consistently correlate with larger model sizes. Notably, certain models exhibit an inverse scaling effect. For instance, DeepSeek-LLM-7B surpasses its larger counterpart, DeepSeek-LLM-67B, in the DFS environment. Similarly, Llama2-7B-chat outperforms Llama2-13B-chat in the Coin environment.

Few-shot setting. The deviation from the scaling law becomes even more pronounced in the few-shot settings, as evidenced in Fig. 5. In these scenarios, medium and large models experience performance drops more frequently than smaller models. This pattern suggests that while larger models are often touted by developers for their superior performance across a range of benchmarks, their effectiveness may not uniformly extend to specialized domains such as algorithmic execution and interactive sequential reasoning. In these areas, the challenges are distinct from those encountered in conventional one-round Q&A formats, indicating a need to reconsider the scaling assumptions in LLM development for these specific applications.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1d03b161-4139-4b74-b634-00b4ce7576db
## 4.5. Teacher Guiding

As evidenced in Fig. 6, even Llama2-7B-chat, which is a small model, yields a higher PSACC as the number of steps grows, indicating that the probability of executing the optimal policy improves over time, especially as correct decisions, rather than errors, accumulate.

[Table 7: average PSACC (↑) of each model in GuessNum, DFS, BFS, Coin, CaveDFS, and CaveBFS under teacher guiding, with models grouped into Small (< 10B), Medium (10B ≤ size < 50B), Large (≥ 50B), and closed-source.]

In environments like DFS (CaveDFS) and BFS (CaveBFS), we noted that the differences in PSACC among various Llama2 models diminish when more guidance steps are provided by the teacher model. While larger models still tend to exhibit a higher average PSACC, as shown in Tab. 7, the gap narrows with increased teacher model intervention.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7f9f4629-8c10-4219-a0fd-184d5a0e2fa8
However, it is important to note, as depicted in Fig. 6, that PSACC may begin to decline in the later stages of interaction. This decline can be attributed to the escalating complexity of adhering to the optimal policy as the model is required to track and remember previous steps, such as (implicitly) maintaining a queue of nodes in BFS and CaveBFS. These observations suggest that even a limited series of correct steps can significantly assist models in sequential reasoning tasks. Furthermore, for models possessing a sufficient level of sequential reasoning ability, this process may lead to a form of self-guidance, where the model reinforces its decisions based on prior correct actions.
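The per-step idea behind PSACC can be made concrete with a small sketch. The snippet below only illustrates a per-step accuracy measured against a reference policy, plus an aggregate over the steps the model takes after a fixed number of teacher-provided moves; the function names, the `optimal_actions_at` callback, and the aggregation are assumptions for illustration and do not reproduce the paper's exact metric definition.

```python
from typing import Callable, List, Sequence, Set

def per_step_accuracy(actions: Sequence[int],
                      optimal_actions_at: Callable[[int], Set[int]]) -> List[float]:
    """Running fraction of decisions that agree with the reference policy.

    `optimal_actions_at(t)` is an assumed callback returning the set of moves
    the reference policy would accept at step t (e.g., any node currently in
    the BFS queue, or the next unvisited neighbor in DFS).
    """
    correct, curve = 0, []
    for t, action in enumerate(actions):
        correct += int(action in optimal_actions_at(t))
        curve.append(correct / (t + 1))
    return curve

def accuracy_after_guidance(actions, optimal_actions_at, guided_steps: int) -> float:
    """Accuracy restricted to the steps the model takes on its own, i.e.,
    after the first `guided_steps` moves were supplied by the teacher model."""
    own_moves = list(actions)[guided_steps:]
    hits = [int(a in optimal_actions_at(guided_steps + i)) for i, a in enumerate(own_moves)]
    return sum(hits) / max(len(hits), 1)
```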
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7865b3ba-e19b-4512-825e-0e29d50f0dc5
## 5. Related Works

Large Language Models model the probability of generating a sequence of text tokens. It has been shown that as the number of parameters and the amount of pretraining data go up, the resulting LLMs demonstrate emerging behaviors, *i.e.*, the model is able to perform tasks that it cannot perform below a certain scale threshold. Although emerging behaviors are still under debate (Schaeffer et al., 2023), the success of GPT models (OpenAI, 2023) has attracted a lot of interest from both academia and industry. Instruction-tuned models (Ouyang et al., 2022) are shown to be more capable on zero-shot question answering and reasoning tasks. It has been discovered that carefully designed prompts like chain-of-thought (Wei et al., 2022) can substantially enhance the performance of LLMs on multi-step reasoning; there has been a line of work in this direction on understanding and improving prompting for LLMs (Fu et al., 2022; Zhou et al., 2022; Wang et al., 2022; Kojima et al., 2022). Open-source models have also emerged from community efforts, based on the weights released by Meta (Touvron et al., 2023). Instruction-tuned models have shown their effectiveness (Taori et al., 2023), with Vicuna (Chiang et al., 2023) and Mixtral (Jiang et al., 2024) demonstrating close-to-GPT-3.5 performance on human benchmarks. In our evaluation, we found that despite their impressive chat abilities, there still exists a gap in algorithmic reasoning ability between open-source and closed-source models, suggesting more effort is needed to develop strong open-source LLMs.

Benchmarking reasoning abilities. The performance of generative models is notoriously difficult to evaluate. To resolve this issue, Vicuna leverages a chatbot arena to let human users evaluate the output of LLMs (Chiang et al., 2023) in a pairwise model comparison fashion. Beyond evaluating human preference, benchmarks for evaluating the reasoning abilities of LLMs also exist. For example, GSM8k (Cobbe et al., 2021) evaluates the reasoning ability of LLMs using a set of grade school math word problems, and MMLU (Hendrycks et al., 2020) further covers over 57 subjects and uses multiple-choice questions to evaluate performance.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d44bc5ff-e130-4855-9fa9-475e26132afa
MATH (Hendrycks et al., 2021b) proposes to evaluate text generation instead of multiple-choice questions. Other than these one-round question-answering evaluations, multi-step interaction-based evaluation benchmarks also exist. For example, Pan et al. (2023) leverages adventure games to measure the ability of LLMs in social decision-making in an interactive manner, and Liu et al. (2023a) uses multi-turn open-ended environments to evaluate LLMs. Another mainstream line of evaluation focuses on code generation, including benchmarks like HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and APPS (Hendrycks et al., 2021a). Different from these existing works, our proposed AQA-Bench forms several interactive environments that focus on evaluating the LLMs' understanding of specific algorithm procedures.

LLM-as-Agent. The study of LLM agents has been a highly active research area since the debut of advanced LLMs like ChatGPT (OpenAI, 2023). Prompting techniques such as Chain-of-Thought (Wei et al., 2022) and ReAct (Yao et al., 2023) elicit the reasoning and acting abilities of LLMs to allow them to act as agents. Benchmarks designed based on games (Fan et al., 2022; Gong et al., 2023) have been proposed to evaluate LLMs as agents that plan and act in complex environments. There also exist benchmarks to measure the ability of LLM agents to automate complex real-world tasks (Shen et al., 2023; Liu et al., 2023a; Deng et al., 2023). Additionally, interaction between multiple agents has also been explored (Hong et al., 2023; Liu et al., 2023b; Park et al., 2023).
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8ff7f181-a160-467d-ae0b-8414da518bf3
Our benchmark explores LLM-as-Agent from the perspective of whether LLMs are capable of following simple algorithm procedures, aiming to measure their sequential reasoning ability.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
79fb32f1-28f7-41a0-bffa-e69661085952
## 6. Conclusion

In this study, we embark on an initial exploration into evaluating LLMs within interactive environments. These environments necessitate a deep understanding of specific algorithmic procedures by the LLMs, ranging from efficiently guessing a number within minimal steps to strategically searching for unvisited nodes in a graph. Our comprehensive evaluation reveals a notable performance gap between current open-source and closed-source models, with the latter showing superior capabilities in these tasks. We expect future efforts to focus on introducing a broader range of interactive environments and developing more effective prompting strategies to better equip LLMs for these benchmarks.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
173cd7e3-982d-47a9-be5a-2f9350cdfb07
## Impact Statement This study focuses on benchmarking LLMs in interactive environments to understand how well they can understand and follow certain algorithmic procedures. We do not expect our work to pose a potential negative societal impact but rather to provide a new benchmark for improving our understanding of powerful LLMs.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0f7232c8-cb10-4e6f-9725-efc1097850b1
## Acknowledgement This work is partially supported by a gift from Open Philanthropy. We thank the Center for AI Safety, the Microsoft Accelerate Foundation Models Research Program, the OpenAI Researcher Access Program, and the Google Cloud Research Credits Program for supporting our computing needs.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3daf4e9e-9668-427e-86a8-57a85e1eb68d
## A. Variance Of GPT Models

Since the GPT models can only be accessed through the OpenAI API, they inherently cannot be made deterministic. Here we evaluated the GPT models on the same dataset four times to limit the source of variance to the GPT models themselves. Tabs. 8 and 9 show that Marginmin and Marginmax are much smaller than the metric differences between GPT-3.5-Turbo and GPT-4-Turbo. This demonstrates that 400 test cases are sufficient to alleviate the impact of the GPT models' randomness on the evaluation results. Thus, the experimental results of the GPT models reported in the following are from a single evaluation rather than a summary of four different evaluations.

[Tables 8 and 9: the average and the Marginmin / Marginmax of each metric (Errmin, Errsum, ACC for GuessNum and Coin; Gmin, Gsum, ACC for DFS, BFS, CaveDFS, and CaveBFS) for GPT-3.5-Turbo and GPT-4-Turbo over the four evaluation runs.]
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c9d13c8d-feba-4d2f-a45f-1e2f35e2a63c
## B. Variance On Hard Mode

This section presents the variance of the models under the HARD testing protocol. Results are in Tabs. 10 and 11. We can see that, overall, under the HARD mode, the performance of the models does not show a strong variance.

[Tables 10 and 11: the average and the Marginmin / Marginmax of each metric for Llama2-7B-chat and Vicuna-7B-v1.5-16K under the HARD protocol, across GuessNum, DFS, BFS, Coin, CaveDFS, and CaveBFS.]
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f01f461d-43be-4953-8986-fb93a863f1de
## C. Complete Results

Some tables in the main text above only contain goal metrics for simplicity. Here we present the complete evaluation results with both goal metrics (*e.g.*, Errmin, Gmin) and policy metrics (*e.g.*, Errsum, Gsum, ACC) in Tabs. 12 to 15.

| Model | GuessNum Errmin ↓ | Errsum ↓ | ACC ↑ | DFS Gmin ↓ | Gsum ↓ | ACC ↑ | BFS Gmin ↓ | Gsum ↓ | ACC ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Small < 10B | | | | | | | | | |
| Llama2-7B-chat | 0.08 (-0.18) | 2.32 (-5.39) | 0.10 (+0.10) | 0.39 (-0.19) | 9.04 (+5.31) | 0.23 (-0.01) | 0.65 (+0.05) | 8.28 (-1.52) | 0.14 (+0.14) |
| Vicuna-7B-v1.5-16K | 0.02 (-0.44) | 1.27 (-7.97) | 0.22 (+0.22) | 0.37 (-0.28) | 8.61 (+2.82) | 0.27 (+0.12) | 0.68 (-0.16) | 13.10 (+2.81) | 0.15 (+0.12) |
| Mistral-7B-Instruct-v02 | 0.01 (-0.05) | 1.07 (-0.95) | 0.22 (+0.22) | 0.14 (-0.35) | 5.74 (+3.02) | 0.51 (-0.10) | 0.39 (+0.15) | 9.87 (+1.15) | 0.17 (+0.04) |
| DeepSeek-LLM-7B | 0.04 (-0.39) | 1.50 (-7.74) | 0.18 (+0.18) | 0.16 (-0.18) | 6.93 (+0.34) | 0.17 (-0.19) | 0.61 (+0.09) | 11.43 (+0.23) | 0.18 (+0.12) |
| DeepSeek-MoE-16B | 0.02 (-0.98) | 1.51 (+0.51) | 0.21 (+0.21) | 0.14 (-0.49) | 6.75 (+1.97) | 0.30 (+0.23) | 0.86 (-0.02) | 2.60 (-5.58) | 0.10 (+0.08) |
| 10B ≤ Medium < 50B | | | | | | | | | |
| Llama2-13B-chat | 0.06 (+0.05) | 1.89 (-1.35) | 0.13 (+0.13) | 0.50 (+0.16) | 10.75 (+4.77) | 0.18 (-0.23) | 0.57 (-0.08) | 11.48 (+0.89) | 0.09 (+0.04) |
| Vicuna-13B-v1.5-16K | 0.12 (-0.27) | 5.42 (-2.89) | 0.12 (+0.12) | 0.16 (-0.50) | 5.24 (-7.99) | 0.63 (+0.51) | 0.23 (-0.58) | 8.14 (-7.47) | 0.27 (+0.22) |
| Mixtral-8x7B-Instruct-v01 | 0.00 (+0.00) | 0.56 (-0.13) | 0.25 (+0.25) | 0.20 (-0.27) | 6.46 (+3.14) | 0.44 (-0.13) | 0.48 (+0.34) | 11.34 (+3.98) | 0.21 (+0.00) |
| Large ≥ 50B | | | | | | | | | |
| Llama2-70B-chat | 0.07 (-0.04) | 1.96 (-0.68) | 0.13 (+0.13) | 0.14 (-0.19) | 5.88 (+1.49) | 0.46 (+0.02) | 0.46 (+0.18) | 9.46 (-0.68) | 0.11 (+0.05) |
| DeepSeek-LLM-67B | 0.00 (-0.12) | 0.58 (-5.04) | 0.25 (+0.25) | 0.18 (-0.22) | 6.79 (+2.45) | 0.33 (-0.09) | 0.36 (-0.09) | 10.40 (-1.19) | 0.18 (+0.09) |
| Closed-source | | | | | | | | | |
| GPT-3.5-Turbo | 0.00 (-0.00) | 0.52 (+0.01) | 0.01 (+0.00) | 0.36 (+0.01) | 5.30 (+0.09) | 0.62 (+0.01) | 0.12 (+0.01) | 6.63 (-0.05) | 0.51 (-0.01) |
| GPT-4-Turbo | 0.00 (-0.00) | 0.50 (+0.00) | 0.47 (+0.01) | 0.02 (-0.01) | 3.93 (+0.00) | 0.94 (+0.00) | 0.00 (+0.00) | 6.08 (-0.00) | 0.40 (+0.02) |
| Gemini-Pro | 0.00 (+0.00) | 0.51 (-0.12) | 0.43 (+0.43) | 0.02 (-0.23) | 4.57 (+0.86) | 0.68 (-0.08) | 0.03 (-0.03) | 6.59 (-0.80) | 0.36 (+0.19) |

| Model | Coin Errmin ↓ | Errsum ↓ | ACC ↑ | CaveDFS Gmin ↓ | Gsum ↓ | ACC ↑ | CaveBFS Gmin ↓ | Gsum ↓ | ACC ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Small < 10B | | | | | | | | | |
| Llama2-7B-chat | 0.11 (+0.04) | 2.70 (-2.32) | 0.08 (+0.08) | 0.38 (-0.12) | 8.79 (+4.14) | 0.26 (-0.07) | 0.58 (-0.18) | 11.27 (+5.61) | 0.11 (+0.06) |
| Vicuna-7B-v1.5-16K | 0.02 (-0.98) | 1.13 (+0.13) | 0.22 (+0.22) | 0.39 (-0.15) | 8.88 (+0.84) | 0.25 (+0.04) | 0.68 (-0.04) | 13.48 (-0.91) | 0.14 (+0.07) |
| Mistral-7B-Instruct-v02 | 0.01 (-0.06) | 1.15 (-2.44) | 0.22 (+0.22) | 0.18 (-0.31) | 6.28 (+1.41) | 0.45 (-0.03) | 0.48 (+0.21) | 10.24 (+0.38) | 0.15 (+0.04) |
| DeepSeek-LLM-7B | 0.04 (-0.35) | 1.53 (-7.29) | 0.17 (+0.17) | 0.19 (-0.39) | 6.80 (-2.28) | 0.33 (+0.17) | 0.62 (-0.15) | 12.06 (+1.39) | 0.17 (+0.13) |
| DeepSeek-MoE-16B | 0.02 (-0.98) | 1.61 (+0.61) | 0.21 (+0.21) | 0.13 (-0.58) | 6.71 (+3.72) | 0.34 (+0.23) | 0.87 (-0.02) | 2.71 (-0.10) | 0.08 (+0.07) |
| 10B ≤ Medium < 50B | | | | | | | | | |
| Llama2-13B-chat | 0.05 (-0.14) | 1.98 (-5.95) | 0.13 (+0.13) | 0.48 (+0.10) | 10.50 (+3.02) | 0.19 (-0.17) | 0.56 (+0.01) | 11.65 (-1.07) | 0.11 (+0.02) |
| Vicuna-13B-v1.5-16K | 0.13 (-0.87) | 5.48 (+4.48) | 0.12 (+0.12) | 0.15 (-0.41) | 5.50 (-2.68) | 0.59 (+0.38) | 0.27 (-0.37) | 8.54 (-2.74) | 0.27 (+0.21) |
| Mixtral-8x7B-Instruct-v01 | 0.00 (-0.00) | 0.64 (-0.14) | 0.23 (+0.23) | 0.17 (-0.15) | 6.27 (+1.66) | 0.43 (-0.02) | 0.39 (+0.24) | 10.40 (+1.92) | 0.21 (+0.04) |
| Large ≥ 50B | | | | | | | | | |
| Llama2-70B-chat | 0.09 (+0.09) | 2.37 (+1.86) | 0.12 (+0.12) | 0.20 (-0.15) | 6.66 (+2.13) | 0.42 (-0.02) | 0.60 (+0.30) | 8.39 (-2.12) | 0.08 (+0.05) |
| DeepSeek-LLM-67B | 0.00 (-0.36) | 0.57 (-7.26) | 0.24 (+0.24) | 0.18 (-0.10) | 6.67 (+1.43) | 0.37 (-0.20) | 0.39 (+0.01) | 10.54 (-0.35) | 0.21 (+0.13) |
| Closed-source | | | | | | | | | |
| GPT-3.5-Turbo | 0.02 (+0.02) | 1.02 (+0.02) | 0.00 (+0.00) | 0.19 (-0.01) | 4.83 (-0.04) | 0.66 (+0.00) | 0.27 (-0.00) | 9.56 (+0.07) | 0.10 (+0.00) |
| GPT-4-Turbo | 0.00 (-0.00) | 0.50 (-0.00) | 0.50 (+0.00) | 0.23 (-0.00) | 3.49 (-0.13) | 0.76 (+0.02) | 0.11 (-0.01) | 8.07 (-0.00) | 0.16 (+0.00) |
| Gemini-Pro | 0.00 (+0.00) | 0.51 (-0.09) | 0.41 (+0.41) | 0.04 (-0.18) | 5.17 (+0.06) | 0.54 (-0.16) | 0.05 (-0.05) | 6.79 (-1.18) | 0.33 (+0.17) |

| Model | GuessNum Errmin ↓ | Errsum ↓ | ACC ↑ | DFS Gmin ↓ | Gsum ↓ | ACC ↑ | BFS Gmin ↓ | Gsum ↓ | ACC ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Small < 10B | | | | | | | | | |
| Llama2-7B-chat | 0.49 | 14.77 | 0.00 | 0.74 | 7.24 | 0.19 | 0.76 | 16.34 | 0.01 |
| Vicuna-7B-v1.5-16K | 0.24 | 14.98 | 0.00 | 0.78 | 10.97 | 0.10 | 0.89 | 17.16 | 0.02 |
| Mistral-7B-Instruct-v02 | 0.06 | 3.43 | 0.00 | 0.65 | 4.11 | 0.61 | 0.46 | 16.29 | 0.08 |
| DeepSeek-LLM-7B | 0.49 | 6.42 | 0.00 | 0.61 | 16.07 | 0.18 | 0.71 | 19.62 | 0.04 |
| DeepSeek-MoE-16B | 1.00 | 1.00 | 0.00 | 0.78 | 8.96 | 0.03 | 0.92 | 11.38 | 0.01 |
| 10B ≤ Medium < 50B | | | | | | | | | |
| Llama2-13B-chat | 0.49 | 14.77 | 0.00 | 0.59 | 11.21 | 0.25 | 0.76 | 17.27 | 0.03 |
| Vicuna-13B-v1.5-16K | 0.49 | 14.77 | 0.00 | 0.80 | 20.45 | 0.07 | 0.83 | 24.92 | 0.03 |
| Mixtral-8x7B-Instruct-v01 | 0.00 | 1.46 | 0.00 | 0.64 | 4.69 | 0.58 | 0.32 | 13.49 | 0.13 |
| Large ≥ 50B | | | | | | | | | |
| Llama2-70B-chat | 0.49 | 14.77 | 0.00 | 0.48 | 9.01 | 0.35 | 0.43 | 18.65 | 0.04 |
| DeepSeek-LLM-67B | 0.00 | 0.70 | 0.00 | 0.51 | 9.48 | 0.28 | 0.67 | 21.98 | 0.05 |
| Closed-source | | | | | | | | | |
| GPT-3.5-Turbo | 0.00 | 0.59 | 0.00 | 0.55 | 8.76 | 0.51 | 0.27 | 13.30 | 0.29 |
| GPT-4-Turbo | 0.00 | 0.52 | 0.04 | 0.08 | 7.71 | 0.87 | 0.01 | 11.14 | 0.26 |
| Gemini-Pro | 0.00 | 0.81 | 0.00 | 0.33 | 7.36 | 0.69 | 0.12 | 13.59 | 0.09 |

| Model | Coin Errmin ↓ | Errsum ↓ | ACC ↑ | CaveDFS Gmin ↓ | Gsum ↓ | ACC ↑ | CaveBFS Gmin ↓ | Gsum ↓ | ACC ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Small < 10B | | | | | | | | | |
| Llama2-7B-chat | 0.49 | 14.77 | 0.00 | 0.68 | 9.78 | 0.19 | 0.83 | 12.04 | 0.04 |
| Vicuna-7B-v1.5-16K | 0.49 | 14.77 | 0.00 | 0.70 | 15.89 | 0.13 | 0.83 | 24.45 | 0.05 |
| Mistral-7B-Instruct-v02 | 0.08 | 5.30 | 0.00 | 0.61 | 6.96 | 0.50 | 0.49 | 18.45 | 0.07 |
| DeepSeek-LLM-7B | 0.49 | 1.98 | 0.00 | 0.74 | 16.02 | 0.11 | 0.86 | 17.20 | 0.03 |
| DeepSeek-MoE-16B | 1.00 | 1.00 | 0.00 | 0.86 | 2.74 | 0.05 | 0.94 | 2.65 | 0.01 |
| 10B ≤ Medium < 50B | | | | | | | | | |
| Llama2-13B-chat | 0.08 | 10.73 | 0.00 | 0.56 | 13.56 | 0.28 | 0.68 | 22.20 | 0.06 |
| Vicuna-13B-v1.5-16K | 1.00 | 1.00 | 0.00 | 0.65 | 14.78 | 0.17 | 0.71 | 20.33 | 0.05 |
| Mixtral-8x7B-Instruct-v01 | 0.07 | 2.22 | 0.00 | 0.50 | 8.21 | 0.38 | 0.30 | 15.46 | 0.09 |
| Large ≥ 50B | | | | | | | | | |
| Llama2-70B-chat | 0.08 | 13.72 | 0.00 | 0.49 | 9.35 | 0.33 | 0.46 | 18.94 | 0.02 |
| DeepSeek-LLM-67B | 0.02 | 2.15 | 0.00 | 0.39 | 10.75 | 0.40 | 0.56 | 20.05 | 0.06 |
| Closed-source | | | | | | | | | |
| GPT-3.5-Turbo | 0.37 | 4.81 | 0.00 | 0.33 | 9.98 | 0.56 | 0.45 | 17.51 | 0.07 |
| GPT-4-Turbo | 0.00 | 0.52 | 0.04 | 0.33 | 7.04 | 0.67 | 0.19 | 14.67 | 0.09 |
| Gemini-Pro | 0.00 | 1.08 | 0.00 | 0.35 | 10.00 | 0.56 | 0.23 | 15.28 | 0.09 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9f317d73-0dee-4d0f-acfe-026208743371
## D. Prompt Instructions For The Models In this section, we present the prompts we fed to the models.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
215a9258-3feb-41d2-a47a-509cc72755a2
## D.1. Base Environments D.1.1. Guessnum You are required to guess the random number which I have just picked between \{min\} and \{max\}. I will only tell you whether the true number is bigger or lower than your guess. Adjust your guess according to my response. Try as few times as you can. You can only reply with an integer number between \{min\} and \{max\}.
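The optimal behaviour this prompt asks for is a binary search over the range. The sketch below is a minimal reference implementation of that policy; the `is_higher` feedback callback is an assumed stand-in for the environment's reply and is not part of the benchmark's code.

```python
def guess_number(min_val, max_val, is_higher):
    """Binary-search policy for the GuessNum environment.

    `is_higher(guess)` is an assumed callback: it returns True if the hidden
    number is bigger than `guess`, False if it is lower, and None on an
    exact hit.
    """
    lo, hi = min_val, max_val
    guesses = []
    while lo <= hi:
        guess = (lo + hi) // 2
        guesses.append(guess)
        feedback = is_higher(guess)
        if feedback is None:      # exact hit
            return guesses
        if feedback:              # the true number is bigger
            lo = guess + 1
        else:                     # the true number is lower
            hi = guess - 1
    return guesses

# Hypothetical usage with a fixed hidden number:
secret = 37
fb = lambda g: None if g == secret else (secret > g)
print(guess_number(1, 100, fb))   # [50, 25, 37]
```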
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8183c49f-1ed1-40e6-8aab-5617880fe304
## D.1.2. Dfs You are required to visit all the nodes in an undirected non-cyclic graph. An undirected non-cyclic graph contains a set of nodes and a set of edges that each connect a pair of nodes. All edges are undirected so that you can move from one node to the other connected by the edge in either direction. Every time you visit a node, you will be given the adjacent nodes connected to this node. You can only reply with an integer number indicating which node to be visited next. Do not explain your answer. Try to traverse the entire graph in as few rounds as possible. You are currently on the node 0. You should use depth-first-search algorithm, each time you should select a node you have not moved to. If all nodes adjacent to the current node have been visited, you should backtrack to the node through which you entered this node for the first time.
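The traversal rule spelled out in this prompt (always move to an unvisited neighbor, otherwise backtrack to the node through which the current node was first entered) corresponds to the minimal sketch below. For simplicity it assumes the full adjacency map is known upfront, whereas the environment reveals neighbors only when a node is visited.

```python
def dfs_moves(adj, start=0):
    """Sequence of node replies an agent following the prompt's DFS rule would give.

    `adj` maps each node to its adjacent nodes (assumed fully known here)."""
    visited = {start}
    parent = {start: None}   # node through which each node was first entered
    moves = []
    current = start
    while True:
        unvisited = [n for n in adj[current] if n not in visited]
        if unvisited:
            nxt = unvisited[0]
            visited.add(nxt)
            parent[nxt] = current
        elif parent[current] is not None:
            nxt = parent[current]            # backtrack
        else:
            break                            # back at the start with nothing left
        moves.append(nxt)
        current = nxt
        if len(visited) == len(adj):
            break
    return moves

# Hypothetical graph with edges 0-1, 0-2, 1-3:
print(dfs_moves({0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}))  # [1, 3, 1, 0, 2]
```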
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
80d73b55-6460-4c69-add0-a19de8070f95
## D.1.3. Bfs You are required to visit all the nodes in an undirected non-cyclic graph. An undirected non-cyclic graph contains a set of nodes, and a set of edges that each connects a pair of nodes. Every time you visit a node, you will be given the adjacent nodes connected to this node. You can only visit nodes that are adjacent to the already visited nodes. You can only reply with an integer number indicating which node to be visited next. Do not explain your answer. Try to traverse the entire graph in as few rounds as possible. You are currently on the node 0. You should use breadth-first-search algorithm. The algorithm works as follows: 1. Initialize a queue data structure and add the starting node to the queue. 2. While the queue is not empty, visit the first node and remove it from the queue. 3. For nodes adjacent to the removed vertex, add the unvisited ones to the queue. 4. Repeat steps 2-3 until the queue is empty.
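The four numbered steps in this prompt map directly onto a queue-based traversal, sketched below. As with the DFS sketch, the full adjacency map is assumed to be known upfront purely for illustration.

```python
from collections import deque

def bfs_moves(adj, start=0):
    """Visit order following the prompt's four steps: seed a queue with the
    start node, pop the front, and enqueue its unvisited neighbors until the
    queue is empty."""
    queue = deque([start])
    visited = {start}
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in adj[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

# Hypothetical graph with edges 0-1, 0-2, 1-3:
print(bfs_moves({0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}))  # [0, 1, 2, 3]
```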
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
84992e92-a66f-4146-beec-52bdcff8579f
## D.2. Embodied Environments D.2.1. Coin You are in a hidden temple where an old witch sits with a chest of gold. The witch promises to reward you with gold coins, the amount hidden within the chest ranging from \{min\} and \{max\}. To claim your prize, you must correctly guess the exact number of gold coins in the chest. After each guess, the witch will hint if the actual amount is higher or lower than your guess. Use these clues to adjust your guess accordingly. Try as few times as you can. You can only reply with an integer number between \{min\} and \{max\}.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
86656f9f-37ac-4740-b0b0-c40344125ea2
## D.2.2. Cavedfs There is an expansive underground cave system in which each cave is uniquely numbered and interconnected by tunnels. Every time you visit a cave, you will know the adjacent caves directly connected to this one. You can only reply with an integer number indicating which cave to be visited next. Do not explain your answer. Your objective is to explore every cave, starting from cave 0. Try to visit all the caves in as few rounds as possible. You are currently in the cave 0.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4b03675d-7351-4496-9369-d5adb7202696
## D.2.3. Cavebfs There is an expansive underground cave system in which each cave is uniquely numbered and interconnected by tunnels. Every time you and your team visit a cave, you will know the adjacent caves directly connected to this one. Your team will then split into smaller groups to explore different caves, but groups can only move to caves adjacent to the visited cave. You can only reply with an integer number indicating which cave to be visited next. Do not explain your answer. Your objective is to explore every cave, starting from cave 0. Try to visit all the caves in as few rounds as possible. You and your team are currently in the cave 0.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2f3d70be-12a6-43cb-98d6-13a3becf2605
## E. Model Versions We used the checkpoint '1106' for GPT-3.5 and GPT-4.0. The open-source model and the corresponding commit ID on HuggingFace are listed as below - **Llama2-7B-chat** c1b0db933684edbfe29a06fa47eb19cc48025e93 - **Llama2-13B-chat** c2f3ec81aac798ae26dcc57799a994dfbf521496 - **Llama2-70B-chat** e1ce257bd76895e0864f3b4d6c7ed3c4cdec93e2 - **Vicuna-7B-v1.5-16K** c8df3ca4436a3bce5c4b5877e0117032081852b4 - **Vicuna-13B-v1.5-16K** 17c61f9ca19f5a7a04e96b2cc0d9bcf2920cb8c2 - **Mistral-7B-Instruct-v0.2** b70aa86578567ba3301b21c8a27bea4e8f6d6d61 - **Mixtral-8x7B-Instruct-v0.1** 125c431e2ff41a156b9f9076f744d2f35dd6e67a - **DeepSeek-LLM-7B** afbda8b347ec881666061fa67447046fc5164ec8 - **DeepSeek-LLM-67B** 79648bef7658bb824e4630740f6e1484c1b0620b - **DeepSeek-MoE-16B** cc01c87767bd905af4cb364693fd107014694ab9
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
42d7d46c-d917-4763-8b69-b4462d1b8e0f
Aishwarya P S 1 Pranav Ajit Nair 1 Yashas Samaga 1 Toby Boyd 2 Sanjiv Kumar 3 **Prateek Jain** * 1 Praneeth Netrapalli * 1
{ "creation_datetime": "2024-03-04", "file_name": "2402.08644v1.md", "file_path": "paper_data/2402.08644v1.md", "file_size": 51949, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
85daa192-2ac1-40b1-9a54-ce032e286558
## Abstract

The autoregressive nature of conventional large language models (LLMs) inherently limits inference speed, as tokens are generated sequentially. While speculative (Leviathan et al., 2023) and parallel (Stern et al., 2018) decoding techniques attempt to mitigate this, they face limitations: either relying on less accurate smaller models for generation or failing to fully leverage the base LLM's representations. We introduce a novel architecture, Tandem transformers, to address these issues. This architecture uniquely combines (1) a small autoregressive model and (2) a large model operating in block mode (processing multiple tokens simultaneously). The small model's predictive accuracy is substantially enhanced by granting it attention to the large model's richer representations. On the PaLM2 pretraining dataset, a tandem of PaLM2-Bison and PaLM2-Gecko demonstrates a 3.3% improvement in next-token prediction accuracy over a standalone PaLM2-Gecko, offering a 1.16× speedup compared to a PaLM2-Otter model with comparable downstream performance. We further incorporate the tandem model within the speculative decoding (SPEED) framework where the large model validates tokens from the small model. This ensures that the Tandem of PaLM2-Bison and PaLM2-Gecko achieves substantial speedup (around 1.14× faster than using vanilla PaLM2-Gecko in SPEED) while maintaining identical downstream task accuracy.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08644v1.md", "file_path": "paper_data/2402.08644v1.md", "file_size": 51949, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3f4c916d-88ee-4908-a196-a97b17d33f00
## 1. Introduction

Despite significant advancements in inference optimization techniques (Leviathan et al., 2023; Du et al., 2022; Liu et al., 2023), the widespread deployment of very large language models (LLMs) remains hindered by their substantial computational costs. A key factor contributing to high inference latency is the autoregressive generation process, where tokens are produced sequentially. This inherent limitation restricts the full utilization of ML accelerators (GPUs/TPUs), which are optimized for matrix-matrix multiplications rather than the matrix-vector operations prevalent in autoregressive LLM decoding. Consequently, prompt processing (where all tokens are handled simultaneously) is significantly more efficient than autoregressive response generation. On the other hand, it is not well understood how much capacity is required to understand the prompt/query/prefill (natural language understanding, aka NLU) vs. the capacity required to generate a response (natural language generation, aka NLG). Current decoder-only LLM architectures tightly couple both these tasks.

Tandem Transformers. In this work, we investigate this fundamental question from an efficiency perspective. We propose Tandem Transformers, a novel architecture that allocates significantly more model capacity to prefill processing (NLU) compared to response generation (NLG). Our goal is to understand whether high-quality response generation can be maintained under this design. Concretely, Tandem Transformers consist of two models - a small model MS and a large model ML, where:

1. ML processes the prompt/query.
2. MS generates the first γ tokens (called a *block*) autoregressively, while attending to the prompt/query representations generated by ML.
3. ML processes the γ tokens generated by MS together (i.e., in a non-autoregressive fashion) and computes their representations.
4. MS then generates the next γ tokens autoregressively, while attending to representations of all tokens until the previous prefill *block* generated by ML.
5. This process is repeated until the response generation is complete (a schematic sketch of this loop follows below).

Tandem Transformer Training. We introduce a projection layer to align the potentially higher-dimensional representation space of ML with that of MS. For efficiency, we initialize ML and MS as independently trained, standard decoder-only models.
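The five steps above describe a block-wise generation loop. The following is a schematic sketch of that loop, assuming simple callable stand-ins: `m_large` returns block-mode representations, `m_small` returns the next token while attending to the (projected) large-model representations, and `project` maps ML's hidden size to MS's. These names are not the paper's API; they are placeholders for illustration, and caching details are omitted.

```python
def tandem_generate(prompt_tokens, m_large, m_small, project, gamma, max_len):
    """Schematic Tandem inference loop (illustrative, not the paper's code).

    m_large(tokens)         -> representations for `tokens`, computed in block mode
    m_small(tokens, memory) -> next token, attending to `memory`
    project(reps)           -> projection from M_L's width to M_S's width
    """
    # Step 1: the large model processes the prompt/query once, in parallel.
    memory = project(m_large(list(prompt_tokens)))
    generated = []
    while len(generated) < max_len:
        # Steps 2/4: the small model autoregressively drafts a block of
        # gamma tokens while attending to the large model's representations.
        block = []
        for _ in range(gamma):
            block.append(m_small(list(prompt_tokens) + generated + block, memory))
        generated.extend(block)
        # Step 3: the large model processes the finished block non-autoregressively;
        # here the prefill representations are simply recomputed from scratch
        # (a real implementation would reuse a KV cache).
        memory = project(m_large(list(prompt_tokens) + generated))
        # Step 5: repeat until the response is complete.
    return generated
```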
{ "creation_datetime": "2024-03-04", "file_name": "2402.08644v1.md", "file_path": "paper_data/2402.08644v1.md", "file_size": 51949, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
07b6c95f-8f73-4318-97b2-90b243ccd66f
Experiments with Tandem (PaLM2-Bison, PaLM2-Gecko) (where PaLM2-Gecko < PaLM2-Otter < PaLM2-Bison, in terms of model size) demonstrate that the capacity needed for the NLU vs. NLG aspects of LLMs can indeed be decoupled, leading to a more efficient architecture without significant accuracy loss. Evaluation on benchmark datasets shows that Tandem (PaLM2-Bison, PaLM2-Gecko) with block length γ = 3 is substantially more accurate than PaLM2-Gecko and comparable to PaLM2-Otter, while achieving approximately 1.16× lower inference latency than PaLM2-Otter. For example, on SuperGLUE (Wang et al., 2019), the tandem model is 3% less accurate than PaLM2-Bison, 16% more accurate than PaLM2-Gecko, and 0.2% less accurate than PaLM2-Otter, with a 1.16× speedup over PaLM2-Otter.

Encoder-Decoder. In contrast to an encoder-decoder architecture, which would only process the query/prefix through an encoder and then generate the entire response through a decoder, Tandem generates only block-size γ (say = 3) tokens through the secondary model MS and then refreshes the entire prefill representations using the primary model ML, which is critical to maintaining high accuracy. That is, by setting γ = 0, Tandem can mimic the decoder-only ML model, while setting γ → ∞ leads to the decoder-only MS model.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08644v1.md", "file_path": "paper_data/2402.08644v1.md", "file_size": 51949, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
23286a56-ac44-48a3-8ba4-39b7691d1c9e
Tandem + SPEED. For applications requiring output identical to the primary model, we propose Tandem + SPEED. The speculative decoding (SPEED) framework (Leviathan et al., 2023) leverages the small model MS in Tandem to generate draft tokens, which are then verified by the large model ML. Crucially, the ability of MS in Tandem to attend to ML's representations significantly improves draft quality, reducing verification overhead compared to standard SPEED. For example, on the Reddit Posts dataset, using the MS in Tandem as the drafter model in SPEED leads to about 11.24% higher per-block acceptance rate compared to a vanilla secondary model. Finally, we show that Tandem transformers can be further improved using logit distillation, and their efficacy within SPEED can be improved using an adaptive block length parameter.

Contrast with Parallel Decoding and Distillation. Recently, multiple speculative or parallel decoding style techniques have been proposed in the literature (Leviathan et al., 2023; Kim et al., 2023; Stern et al., 2018). These techniques attempt to generate a draft of tokens using a relatively inexpensive drafter model. Parallel decoding attempts to generate multiple draft tokens in parallel by learning classifiers on top of the output of the primary model ML, while speculative decoding can provide significantly better drafts by using a small but autoregressive model. In contrast, Tandem is a *stand-alone* model on its own and does not natively require verification by ML to generate reasonable outputs (see benchmark numbers in Table 3). Furthermore, Tandem + SPEED is able to use representations of ML while still generating tokens autoregressively, which provides an overall much better tradeoff between token quality and model latency for the drafter. Finally, recent works have also shown the efficacy of logit distillation for training better drafter models within SPEED (Zhou et al., 2023). Our approach is complementary and can be combined with distillation.
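The draft-then-verify interplay described here follows the usual speculative decoding recipe; a simplified, greedy sketch is given below. `draft_next` stands in for the tandem secondary model MS and `verify_block` for a single parallel pass of the primary model ML. Both names, and the exact-match acceptance rule (the real SPEED framework uses a probabilistic accept/reject step), are simplifications for illustration.

```python
def speed_decode(prompt, draft_next, verify_block, gamma, max_len):
    """Greedy sketch of SPEED with a tandem drafter (illustrative names).

    draft_next(tokens)      -> next token proposed by the small tandem model
    verify_block(tokens, k) -> the primary model's own next tokens for the
                               last k positions, computed in one parallel pass
    """
    output = list(prompt)
    while len(output) - len(prompt) < max_len:
        # Draft a block of gamma tokens with the small model.
        draft = []
        for _ in range(gamma):
            draft.append(draft_next(output + draft))
        # Verify the whole block with a single parallel pass of the large model.
        target = verify_block(output + draft, len(draft) + 1)
        accepted = 0
        while accepted < gamma and draft[accepted] == target[accepted]:
            accepted += 1
        # Keep the accepted prefix plus one token from the large model.
        output.extend(draft[:accepted] + [target[accepted]])
    return output[len(prompt):]
```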
Empirical Results for Tandem + SPEED. Finally, we conduct extensive latency evaluation on TPUv5e for both stand alone and SPEED versions of Tandem (PaLM2-Bison, PaLM2-Gecko), with PaLM2-Bison and PaLM2-Gecko being the primary ML and secondary MS models, respectively. In particular, on multiple datasets, we observe that Tandem + SPEED with distillation is at least 2.19× faster than the baseline PaLM2-Bison model while ensuring the same output quality. Furthermore, compared to standard SPEED with MS as the secondary model, our model is 1.11× to 1.17× faster. An adaptive block length in SPEED further helps reduce Tandem's latency by 1.04× to 1.09× on multiple datasets. Finally, we demonstrate that our results also hold for practical settings like batch-size > 1.

Contributions. In summary, the key contributions of this work are:

1. Tandem architecture: A novel architecture to disaggregate prompt/prefill processing capacity from response generation.
2. Tandem + SPEED: Improved speculative decoding leveraging Tandem's superior drafting for guaranteed output equivalence with lower latency.
3. Adaptive Block Length: Enhances Tandem + SPEED by dynamically adjusting the drafted token count.
4. TPUv5e evaluation: End-to-end evaluation on TPUv5e with PaLM2-Bison as the primary model. A distilled Tandem + SPEED is 2.4× faster compared to the vanilla PaLM2-Bison model and 1.11× to 1.17× faster compared to distilled MS + SPEED (Leviathan et al., 2023) applied in the same setting.
Outline of the paper: The rest of the paper is organized as follows. We briefly review related work in Section 2. In Section 3, we present the main ideas and the design of the Tandem transformers architecture. Section 4 presents the experimental results on Tandem transformers, and Section 5 describes deep Tandem transformers. We then conclude with some future directions in Section 6.
## 2. Related Work

Encoder-decoder models: Encoder-decoder transformer architectures are widely used for specific tasks such as machine translation (Vaswani et al., 2017). Given the computational inefficiency of autoregressive decoding, several works have explored using a large encoder with a small decoder. Our work can be seen as extending these ideas to use an encoder-decoder model for the decoder itself.

Mixture of experts (MoE)/sparsity based approaches: Mixture of experts (Du et al., 2022) and sparsity based approaches (Li et al., 2022) have also been studied for optimizing the inference cost of LLMs. These approaches are complementary to the ones proposed in our paper; for example, either or both of the large model ML and the small model MS can be an MoE or sparse model.

Distillation: Since the seminal paper of Hinton et al. (2015), distilling the knowledge of a large model into a smaller model by using the logits of the large model as a training target has been widely used in several settings. Our work can be seen as a more general version of distillation for transformers, where the small model can directly refer to large-model representations for tokens from previous blocks. Furthermore, our experiments (see Section 4) show that our techniques are complementary to logit distillation and provide additional gains on top of vanilla logit distillation.

Speculative decoding (SPEED): Speculative decoding (Leviathan et al., 2023; Kim et al., 2023) is a framework to reduce the inference latency of LLMs without affecting their quality, and it has shown substantial improvements in LLM inference. We demonstrate that Tandem transformers can be used within the SPEED framework, improving the efficacy of SPEED. While multiple drafters have been explored in the context of SPEED, such as stand alone models (Leviathan et al., 2023), retrieval based drafters (He et al., 2023), and distillation based drafters (Zhou et al., 2023), distillation based drafters currently seem to perform the best. As we demonstrate in Section 4, Tandem provides a significantly more powerful drafter, yielding better token drafts and hence lower latency.
## 3. Tandem Transformers

In this section, we describe the Tandem transformers architecture, its training, and inference.

Standard (decoder) transformer: Given a sequence $t_1, t_2, \cdots, t_S$ of S tokens as input, where $t_i$ corresponds to the $i$th token id, a standard decoder transformer with L layers executes as follows:

$$\widetilde{x}_{i}^{(j+1)}=\text{Atn}^{(j+1)}\big(x_{i}^{(j)}\,\big|\,x_{\leq i}^{(j)}\big)$$
$$x_{i}^{(j+1)}=\text{FF}^{(j+1)}\big(\widetilde{x}_{i}^{(j+1)}\big)\qquad\text{for }j=0,\cdots,L-1,\tag{1}$$

where $x_{i}^{(0)}=\text{Emb}(t_i)$ is the embedding of $t_i$, $x_{i}^{(j)}$ is the representation after the $j$th layer, and $\text{Atn}^{(j)}(\cdot|\cdot)$ and $\text{FF}^{(j)}(\cdot)$ are the $j$th attention and feedforward layers respectively (Vaswani et al., 2017). Note that the attention is purely causal (i.e., the $i$th token attends only to tokens $t_k$ for $k \leq i$) since we are considering a decoder-only transformer.

Tandem transformer: A Tandem transformer model comprises a primary model ML and a secondary model MS. Typically, SIZEOF(ML) ≫ SIZEOF(MS). Given a sequence of tokens $t_1, t_2, \cdots, t_S$ as input, the primary model ML processes these tokens just like a standard (decoder) transformer (1). Let γ be the block length parameter, and LS and LL be the number of layers of the secondary model and primary model, respectively. Let ℓ : [LS] → [LL] be a layer assignment function from the secondary model to the primary model. The secondary model attends to the primary model's representations for all tokens from the previous blocks. More formally, we have:
$$\widehat{y}_{i}^{(j)}=\text{FF}_{\text{Tandem}}^{(j)}\big(x_{i}^{(\ell(j))}\big)$$
$$\widetilde{y}_{i}^{(j+1)}=\text{Atn}_{\text{S}}^{(j+1)}\big(y_{i}^{(j)}\,\big|\,\widehat{y}_{\leq k}^{(j)},\,y_{[k+1,i]}^{(j)}\big)\quad\text{where }k=\Big\lfloor\frac{i}{\gamma}\Big\rfloor\cdot\gamma$$
$$y_{i}^{(j+1)}=\text{FF}_{\text{S}}^{(j+1)}\big(\widetilde{y}_{i}^{(j+1)}\big)\qquad\text{for }j=0,\cdots,L_{S}-1,\tag{2}$$

where $x_{i}^{(j)}$ and $y_{i}^{(j)}$ denote the $j$th layer representation of the $i$th token under ML and MS respectively, $\text{FF}_{\text{Tandem}}^{(j)}(\cdot)$ denotes a feedforward layer that converts the representation $x_{i}^{(\ell(j))}$ of the $i$th token from the $\ell(j)$th layer of the primary model into a representation $\widehat{y}_{i}^{(j)}$ of the same $i$th token for the $j$th layer of the secondary model, and $\text{Atn}_{\text{S}}^{(j)}(\cdot|\cdot)$ and $\text{FF}_{\text{S}}^{(j)}(\cdot)$ denote the attention and feedforward blocks respectively in the $j$th layer of the secondary model MS. The final output of the tandem model is $y^{(L_S)}$.
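For concreteness, the following is a minimal NumPy sketch of the attention pattern in Equation (2) for a single secondary-model layer, assuming single-head attention; the dimensions, the randomly initialized weights, and the 0-based indexing are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Illustrative sizes (not from the paper): the primary model M_L is wider than M_S.
d_L, d_S, gamma, seq_len = 16, 8, 3, 10
rng = np.random.default_rng(0)

# Stand-ins for the layer inputs of Equation (2): x from M_L at layer l(j),
# y from M_S at layer j (positions are 0-indexed here, unlike the paper).
x_primary = rng.normal(size=(seq_len, d_L))    # x_i^{(l(j))}
y_secondary = rng.normal(size=(seq_len, d_S))  # y_i^{(j)}

# FF_Tandem^{(j)}: a linear projection from the primary width to the secondary width.
W_tandem = rng.normal(size=(d_L, d_S)) / np.sqrt(d_L)
y_hat = x_primary @ W_tandem                   # \hat{y}_i^{(j)}

# Single-head attention weights for Atn_S^{(j+1)} (illustrative).
W_q, W_k, W_v = (rng.normal(size=(d_S, d_S)) / np.sqrt(d_S) for _ in range(3))

def tandem_attention(i):
    """Attention output for position i: keys/values of earlier blocks come from
    the projected primary representations y_hat, while keys/values inside the
    current block come from the secondary model's own representations."""
    block_start = (i // gamma) * gamma
    kv_in = np.concatenate([y_hat[:block_start],             # previous blocks
                            y_secondary[block_start:i + 1]])  # current block
    q = y_secondary[i] @ W_q
    scores = (kv_in @ W_k) @ q / np.sqrt(d_S)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ (kv_in @ W_v)   # \tilde{y}_i^{(j+1)}, then fed to FF_S^{(j+1)}

print(tandem_attention(7).shape)     # -> (8,), i.e. (d_S,)
```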
We note that the primary and the secondary model can vary in almost all scale parameters, such as representation dimensions, expansion factors of feedforward layers, and number of attention heads, as well as in whether the attention is multi-head or multi-query. In all of our experiments, we take $\text{FF}_{\text{Tandem}}^{(j)}(\cdot)$ to be linear projection layers.

Training: Given a block length parameter γ, we partition the training sequence into blocks, each consisting of γ consecutive tokens. Consider the autoregressive prediction of the $j$th token (for some j ≤ γ) within the $i$th block. The input to the secondary model MS is the previous token. Crucially, within the attention blocks of MS:

- Key/value pairs for all tokens up to the $j$th token in the current block are computed by MS itself.
- Key/value pairs for tokens in previous blocks are computed by the primary model ML.

A projection/tandem feedforward layer then aligns the representational dimensions from ML to MS, as described in Equation (2). We explore multiple training configurations for Tandem transformers:

- **Primary Model Frozen:** Only the secondary model parameters MS and the Tandem feedforward layers $\text{FF}_{\text{Tandem}}^{(j)}$ are updated. Loss is applied solely to the secondary model's output $y^{(L_S)}$ (Equation (2)).
- **Both Models Trained, Loss on Secondary Outputs:** Similar to the above, loss is applied to the secondary model's output. However, both ML and MS, along with $\text{FF}_{\text{Tandem}}^{(j)}$, are trained.
- **Both Models Trained, Loss on Both Outputs:** The combined loss incorporates both the primary model's outputs $x^{(L_L)}$ and the secondary model's outputs $y^{(L_S)}$.
For training efficiency, we initialize the primary and secondary models with high quality pretrained checkpoints, and then continue pretraining the tandem architecture for a small number of additional steps. In particular, we use the pretrained PaLM2-Bison and PaLM2-Gecko checkpoints to initialize ML and MS respectively. In this setting, we found that the **Primary Model Frozen** approach provides the best accuracy. Our Tandem-CE model is obtained by using cross entropy (CE) loss on the output of the secondary model, as described above.

Tandem-Distil: To further enhance MS's quality, we apply a distillation loss on its predictions, using the logits of the pretrained ML as targets with CE loss. This aligns naturally with the Tandem architecture, as MS already incorporates representations from ML. The Tandem-Distil model follows a two stage training setup: it is first trained to minimize the CE loss with respect to the ground truth labels, and in the second stage a weighting factor of λ = 0.5 is used to balance the CE loss with respect to ground truth labels and the CE logit distillation loss with respect to the outputs of the PaLM2-Bison model. We note that Tandem-Distil in general performs better than Tandem-CE.
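As a concrete illustration, here is a minimal NumPy sketch of a per-token version of this mixed second-stage objective. The exact way λ enters the combination is our assumption (with λ = 0.5 the two readings coincide), and all names and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def tandem_distil_loss(student_logits, teacher_logits, labels, lam=0.5):
    """Second-stage objective (sketch): lam * CE(ground-truth labels, M_S)
    + (1 - lam) * CE(teacher soft targets from M_L's logits, M_S)."""
    logp = log_softmax(student_logits)                      # (T, V)
    ce_labels = -logp[np.arange(len(labels)), labels].mean()
    teacher_probs = np.exp(log_softmax(teacher_logits))     # soft targets
    ce_teacher = -(teacher_probs * logp).sum(axis=-1).mean()
    return lam * ce_labels + (1.0 - lam) * ce_teacher

# Toy usage: 4 token positions, vocabulary of size 6.
rng = np.random.default_rng(0)
student = rng.normal(size=(4, 6))   # logits from M_S in the Tandem model
teacher = rng.normal(size=(4, 6))   # logits from the frozen teacher (PaLM2-Bison)
labels = np.array([1, 3, 0, 5])
print(tandem_distil_loss(student, teacher, labels, lam=0.5))
```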
Inference. The inference process begins with the primary model (ML) processing the prompt and generating representations for all prompt tokens. The secondary model (MS) then autoregressively generates the first block of γ response tokens. Crucially, MS attends to the primary model's representations, aligned via the projection layer. Once the first response block is generated, the primary model (ML) processes these tokens and computes their representations. We consider two inference configurations:

- Representation Generation + Token Prediction (Figure 2): ML additionally predicts the next token.
- Representation Generation Only (Appendix B, Figure 4): ML solely generates representations for the response block.

In both configurations, the representations generated by ML are used by the secondary model (MS) to generate the subsequent block of γ response tokens. Also note that, as in training, MS attends to its own representations for all previous tokens within the current block.

To disaggregate query and response generation, we use Representation Generation Only for processing the input query/prefix. However, for subsequent blocks, where the prefill (query + generated response so far) is processed, we use Representation Generation + Token Prediction from ML. Depending on the training protocol - specifically, whether primary model outputs are reliable - we may optionally allow the primary model (ML) to generate the first token of the subsequent block (processing γ + 1 tokens). Crucially, in this scenario, we must ensure that the keys and values associated with the next block's first token, computed by ML, are not overwritten when the secondary model (MS) executes its attention layers.

Inference-Time Block Length Flexibility. While we train Tandem transformers with a fixed block length γ, the architecture supports arbitrary γ values during inference. Larger γ values generally improve efficiency by maximizing the primary model's (ML) utilization of accelerator hardware.
Although Tandem is trained with a fixed γ, in SPEED evaluations we find that the optimal γ is often much larger, indicating the robustness of Tandem to changes in γ at inference time.
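The block-wise inference procedure described above can be summarized by the following Python sketch; `primary_prefill` and `secondary_decode_block` are hypothetical stand-ins for ML and MS (they are not APIs from the paper), and the toy implementations at the bottom exist only so the sketch runs end to end.

```python
def tandem_generate(prompt_tokens, gamma, max_new_tokens,
                    primary_prefill, secondary_decode_block):
    """Stand-alone Tandem inference loop (sketch).

    primary_prefill(tokens): M_L computes representations for all tokens seen
        so far (and, in the Representation Generation + Token Prediction
        configuration, could also return the next token).
    secondary_decode_block(reps, tokens, gamma): M_S autoregressively generates
        the next gamma tokens, attending to `reps` for all previous blocks and
        to its own activations within the current block.
    """
    tokens = list(prompt_tokens)
    primary_reps = primary_prefill(tokens)          # process the query/prefix
    while len(tokens) - len(prompt_tokens) < max_new_tokens:
        block = secondary_decode_block(primary_reps, tokens, gamma)
        tokens.extend(block)
        primary_reps = primary_prefill(tokens)      # refresh representations
    return tokens

# Toy stand-ins so the sketch runs end to end (not real models).
def primary_prefill(tokens):
    return [t % 97 for t in tokens]

def secondary_decode_block(reps, tokens, gamma):
    return [(sum(reps) + len(tokens) + j) % 100 for j in range(gamma)]

print(tandem_generate([5, 11, 2], gamma=3, max_new_tokens=6,
                      primary_prefill=primary_prefill,
                      secondary_decode_block=secondary_decode_block))
```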
## 3.1. Tandem + SPEED: Tandem In The Speculative Decoding Framework

SPEED mitigates the inefficiency of autoregressive generation by using a smaller drafter/secondary model to generate tokens and a larger verifier/primary model to confirm them. SPEED guarantees output quality matching the verifier, but its efficacy hinges on the drafter's ability to generate long, accurate draft sequences. Tandem transformers are uniquely suited for this framework, with our secondary model MS acting as the "drafter" and the primary model ML acting as the "verifier".

Given a Tandem model, we use ML to process the query/prefix and generate representations for it. MS uses these and produces a draft for the first γ tokens autoregressively. ML then verifies this entire block simultaneously and identifies the first location i where the draft token is deemed incorrect by ML (i = γ + 1 if all the draft tokens are verified successfully). We take the output of the large model for the ith token, and the small model MS then continues to generate draft tokens from the (i+1)th position onwards, while using the representations of all the previous tokens from the large model ML. This process continues until a full response is generated.

The above process generalizes to the setting where we generate multiple full responses for the same query (which we refer to as num-samples), for example to eventually rank these responses and select the "best" one (Mudgal et al., 2023). In this case, the location of the rejected token can vary across the different samples being generated. Similarly, the approach generalizes to larger batch sizes as well, when we simultaneously process multiple queries together. Practical systems potentially use both num-samples > 1 and batch-size > 1, but latency gains for Tandem + SPEED depend on the overall batch size, which is num-samples × batch-size. So, for simplicity, we focus only on num-samples > 1 and fix batch-size to be 1. (Note that it is more challenging to obtain latency improvements with increasing num-samples than with increasing batch size: even without optimizations such as SPEED, larger num-samples obtains better efficiency on all layers, whereas a larger batch size obtains better efficiency only on the feedforward and softmax layers, and not the attention layer.)
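The draft-and-verify loop can be sketched as follows. `draft_block` and `verify_block` are hypothetical callables standing in for MS and ML, and the accept-until-first-mismatch rule below mirrors the greedy verification described above rather than the general stochastic acceptance rule of SPEED.

```python
def tandem_speed_generate(prompt, gamma, max_new, draft_block, verify_block):
    """Tandem + SPEED generation loop (sketch, greedy verification).

    draft_block(tokens, gamma): M_S drafts gamma tokens autoregressively,
        attending to M_L's representations for all earlier blocks.
    verify_block(tokens, draft): M_L processes the drafted block in one pass and
        returns its own token choice at each drafted position plus one extra
        position (gamma + 1 tokens in total).
    """
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new:
        draft = draft_block(tokens, gamma)
        target = verify_block(tokens, draft)
        n_ok = 0
        while n_ok < gamma and draft[n_ok] == target[n_ok]:
            n_ok += 1                       # draft token accepted by M_L
        # Keep the verified prefix of the draft, plus M_L's token at the first
        # mismatch (or its extra token when the entire draft is accepted).
        tokens.extend(draft[:n_ok] + [target[n_ok]])
    return tokens

# Toy stand-ins so the loop runs; a real drafter/verifier would be M_S and M_L.
draft_block = lambda toks, g: [(len(toks) + j) % 5 for j in range(g)]
verify_block = lambda toks, dr: [(len(toks) + j) % 5 for j in range(len(dr) + 1)]
print(tandem_speed_generate([1, 2], 3, 8, draft_block, verify_block))
```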
## Adaptive Block Length

While standard SPEED uses a fixed block length γ, we introduce an adaptive approach. We train a relatively small 2-layer multi-layer perceptron (the *router MLP*) to predict whether the current draft token from MS is likely to be accepted by the primary model ML. At each timestep, we compare the prediction of this small model to a threshold τ, deciding whether to: (a) verify with ML, or (b) continue drafting with MS. The input features to the router MLP are: MS's entropy over the current token's vocabulary distribution, the top-k probabilities for the current token for an appropriate k, and MS's model embeddings corresponding to these top-k most probable tokens. We train the router MLP to predict the probability of disagreement using cross-entropy loss, with the ground truth being $\mathrm{TV}(y^S_j, y^P_j)$, the total variation (TV) distance between the output distributions of MS and ML for the $j$th token.
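A minimal NumPy sketch of the router decision is given below. Only the feature types (entropy, top-k probabilities, top-k embeddings) and the threshold τ = 0.8 come from the text; the feature layout, layer sizes, the untrained random weights, and the direction of the comparison against τ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d_emb, k, tau = 100, 8, 4, 0.8
emb_S = rng.normal(size=(vocab, d_emb))     # stand-in for M_S token embeddings

def softmax(z):
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def router_features(ms_logits):
    """Features for the current draft token: entropy of M_S's distribution,
    its top-k probabilities, and M_S's embeddings of those top-k tokens."""
    p = softmax(ms_logits)
    entropy = -(p * np.log(p + 1e-9)).sum()
    top = np.argsort(p)[-k:][::-1]
    return np.concatenate([[entropy], p[top], emb_S[top].ravel()])

# 2-layer router MLP with illustrative, randomly initialized weights; in
# practice it is trained with CE loss against the TV distance between the
# output distributions of M_S and M_L at each position.
d_in, d_hid = 1 + k + k * d_emb, 16
W1, b1 = 0.1 * rng.normal(size=(d_in, d_hid)), np.zeros(d_hid)
W2, b2 = 0.1 * rng.normal(size=(d_hid, 1)), np.zeros(1)

def p_disagree(ms_logits):
    h = np.maximum(router_features(ms_logits) @ W1 + b1, 0.0)
    z = h @ W2 + b2
    return float(1.0 / (1.0 + np.exp(-z[0])))

ms_logits = rng.normal(size=vocab)
# Continue drafting with M_S only while the predicted disagreement stays small;
# otherwise stop and verify the tokens drafted so far with M_L.
print("keep drafting:", p_disagree(ms_logits) < tau)
```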
## 4. Experiments

In this section, we present experimental results evaluating Tandem transformer models. Except for the new architecture of Tandem transformers, we generally follow the same training protocols as described in (Anil et al., 2023), including the training dataset, optimizer, etc.

Further Training Details. For both Tandem-CE and Tandem-Distil, we initialize the secondary model MS to be the pretrained PaLM2-Gecko, while freezing the primary model ML to be the pretrained PaLM2-Bison (Anil et al., 2023). The projection/Tandem feedforward layers are chosen to be linear layers and initialized randomly. Both Tandem models, Tandem-CE and Tandem-Distil, are trained with a block length of γ = 2. For our evaluation within the SPEED framework, we consider a logit distillation version of PaLM2-Gecko, called PaLM2-Gecko-Distil, which is initialized with the PaLM2-Gecko model and then trained using logit distillation, similar to the second phase of training of the Tandem-Distil model, since distillation has been shown to help improve the secondary models in SPEED (Zhou et al., 2023).

Adaptive block length in SPEED. We train a small, 2-layer MLP model to predict whether the current drafter token from MS is likely to be accepted by the primary model ML. We set τ = 0.8 as the threshold to determine whether MS can continue generating more tokens.
## 4.1. Performance Evaluation

We compare the performance of Tandem-CE and Tandem-Distil against PaLM2-Gecko, PaLM2-Gecko-Distil, PaLM2-Otter and PaLM2-Bison on several downstream tasks as well as in terms of latency. For downstream task evaluation, we compare on SuperGLUE (Wang et al., 2019), TydiQA (Clark et al., 2020), a large collection of generation tasks, which we call Gen-tasks (comprising SQuADv2 (Rajpurkar et al., 2018), Natural Questions (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017), WebQuestions (Berant et al., 2013) and Lambada (Paperno et al., 2016)), MBPP (Austin et al., 2021), and WMT22 (Zerva et al., 2022). WMT22 results are averaged over x → en translations for different languages x. For TydiQA, we pass the gold passage as part of the input, and report the average F1-score over all languages. For SuperGLUE and Gen-tasks, we follow the experimental settings as described in (Anil et al., 2023) and report the average results. We report 1-shot evaluations for all performance evaluation experiments.
## 4.2. Latency Evaluation

We perform latency evaluation in two different settings. In the first setting, we use Tandem-CE and Tandem-Distil as secondary models within SPEED, with PaLM2-Bison as the primary model. Note that the SPEED framework guarantees that the outputs will be of the same quality as those of the primary model PaLM2-Bison. For comparison, we use PaLM2-Bison as a stand alone model, as well as SPEED with PaLM2-Bison as primary and PaLM2-Gecko-Distil as secondary, as our baselines. In the second setting, we evaluate the latency of Tandem-CE and Tandem-Distil as stand alone models, comparing with PaLM2-Gecko, PaLM2-Otter and PaLM2-Bison. All the evaluations are performed on TPUv5e (Cloud). We evaluate latency on the test sets of CNN/DailyMail (Hermann et al., 2015) and Reddit Posts summarization (Kim et al., 2018), and on 1000 prompts from the 1 Billion Word
Benchmark (Chelba et al., 2014). We report latency results for both num-samples = 1 as well as 4.

Table 2. Speedup over PaLM2-Bison within SPEED, using PaLM2-Gecko-Distil (baseline) vs. Tandem-Distil (ours) as the secondary model.

| Dataset | Num-Samples | PaLM2-Gecko-Distil + SPEED (baseline) | Tandem-Distil + SPEED (ours; relative gain) |
|---|---|---|---|
| Reddit | 1 | 2.169× (γ = 7) | |
| Reddit | 4 | 1.919× (γ = 5) | |
| CNN/DailyMail | 1 | 2.219× (γ = 7) | |
| CNN/DailyMail | 4 | 1.940× (γ = 5) | |
| LM1B | 1 | 2.348× (γ = 7) | |
| LM1B | 4 | 2.011× (γ = 5) | |

Table 3. Downstream task performance and stand alone speedup of Tandem models compared with PaLM2 baselines.

| Dataset | PaLM2-Gecko | Tandem-CE (ours) | Tandem-Distil (ours) | PaLM2-Otter | PaLM2-Bison |
|---|---|---|---|---|---|
| Generative tasks | 28.8 | 37.1 | 44.0 | 51.1 | 57.5 |
| MBPP | 4.8 | 13.8 | 21.2 | 20.8 | 30.4 |
| WMT22-1shot-to-nonenglish | 35.1 | 37.4 | 44.1 | 48.4 | 50.5 |
| TydiQA-GoldP | 55.0 | 65.7 | 69.0 | 69.7 | 73.4 |
| SuperGLUE | 62.8 | 78.5 | 78.8 | 79.0 | 81.5 |
| Speedup over PaLM2-Bison | 6.397× | 2.744× | 2.744× | 2.359× | 1× |

The numbers in the last row demonstrate that the Tandem model is about 1.16× faster than the PaLM2-Otter model.
## 4.3. Evaluation Results

We now present the results of our evaluation of tandem transformers.

Pretraining metrics: Table 1 presents a comparison of accuracy and cross entropy (CE) loss of various baselines as well as tandem models, with respect to both the ground truth labels and the primary model ML's predictions. As we can see, tandem transformers perform better than logit distillation, while combining logit distillation with tandem transformers further improves performance.

Table 1. Pretraining metrics of the baseline and Tandem models, with respect to ground truth labels and relative to the primary model's predictions.

| | PaLM2-Gecko | PaLM2-Gecko-Distil | Tandem-CE (ours) | Tandem-Distil (ours) |
|---|---|---|---|---|
| Accuracy (ground truth) | 55.06 | 56.50 | 58.35 | 58.61 |
| CE loss (ground truth) | 2.14 | 2.12 | 1.94 | 1.99 |
| Relative accuracy | 74.64 | 75.30 | 80.00 | 81.00 |
| Relative TV distance | 0.391 | 0.318 | 0.178 | 0.141 |

Latency within SPEED: Table 2 presents results on the latency of Tandem transformers within the SPEED framework. Specifically, we compare the speedup obtained over the PaLM2-Bison model by using SPEED with PaLM2-Gecko-Distil as the secondary model vs Tandem-Distil as the secondary model. The results clearly demonstrate the improvements obtained by tandem on top of distillation. Table 8 in Appendix A presents the speedups computed only over the decode time (i.e., excluding the query processing time). Note that since the SPEED framework guarantees that the outputs are of the same quality as those of the primary model, PaLM2-Bison, the latency improvements given by the tandem model do not have any quality tradeoffs.

Evaluation as a standalone model: We evaluate the Tandem model as a stand alone model in its own right. Table 3 presents a comparison of both downstream evaluations on standard benchmarks and latency evaluations. As can be seen, the Tandem model substantially improves upon the downstream performance of the baseline model, and is almost on par with the PaLM2-Otter model. Detailed results presented in Tables 10 and 11 in Appendix A show that, in some cases, the tandem model is closer to the PaLM2-Bison model itself. At the same time, the tandem model is about 1.16× faster compared to the PaLM2-Otter model, making it a compelling candidate for stand alone deployment as well.
Adaptive block length: We now present a way to improve the performance of SPEED with adaptive block lengths (Adaptive γ, or AG), where after every token predicted by the secondary model, we use a small, inexpensive router to determine whether to continue predicting with the secondary model, or to verify the tokens generated so far with the primary model. Table 4 presents the speedup obtained by Tandem-Distil + SPEED + AG compared with the PaLM2-Bison model as well as the Tandem-Distil + SPEED model.

Table 4. Speedups obtained by Tandem-Distil + SPEED + AG.

| Dataset | Speedup over PaLM2-Bison | Speedup over Tandem-Distil + SPEED |
|---|---|---|
| Reddit | 2.582× (γmax = 17) | 1.045× |
| CNN/DailyMail | 2.599× (γmax = 17) | 1.051× |
| LM1B | 2.853× (γmax = 27) | 1.093× |

Table 9 in Appendix A presents the speedup as measured only over the decode component of the latency, i.e., excluding query processing time. In Table 5, we present the number of primary model and secondary model runs for Tandem-Distil + SPEED and Tandem-Distil + SPEED + Adaptive γ.

Table 5. Number of primary and secondary model runs for Tandem-Distil + SPEED, with and without Adaptive γ.

| | Tandem-Distil | Tandem-Distil + AG |
|---|---|---|
| Primary model runs | 51.53 | 54.67 |
| Secondary model runs | 360.73 | 271.63 |

The results put forth the benefits of using an adaptive block length, since it drastically reduces the number of secondary model runs while only slightly increasing the number of primary model runs.
## 5. Deep Tandem Transformers

In tandem transformers, we used the large model ML to process tokens in blocks, so that the small model MS can use the large model's representations for all the tokens from previous blocks. In this section, we present a different approach to using ML and MS in tandem, where ML predicts a sketch of the next block of tokens in parallel, while MS does the actual sampling in an autoregressive manner. More concretely, we have:

$$\widetilde{x}_{i}^{(j+1)}=\text{Atn}_{\text{L}}^{(j+1)}\big(x_{i}^{(j)}\,\big|\,x_{\leq k*\gamma}^{(j)}\big)\quad\text{where }k=\Big\lceil\frac{i-\gamma}{\gamma}\Big\rceil$$
$$x_{i}^{(j+1)}=\text{FF}_{\text{L}}^{(j+1)}\big(\widetilde{x}_{i}^{(j+1)}\big)\qquad\text{for }j=0,\cdots,L_{L}-1,\tag{3}$$

where the input $x_{i}^{(0)}=\text{Emb}_{\text{L}}(x[i-\gamma])$ is the large model's embedding of the token γ positions earlier. In other words, given all tokens $x[1],\cdots,x[k*\gamma]$ of the previous blocks, the large model produces a draft of the next γ tokens $x_{k*\gamma+1}^{(L_L)},\cdots,x_{(k+1)*\gamma}^{(L_L)}$ in parallel. We then add the previous-token representations to these sketches and pass the result through the small model, which predicts the next token autoregressively:
$$y_{i}^{(0)}=\text{Emb}_{\text{S}}(x[i-1])+\text{FF}_{\text{Tandem}}\big(x_{i}^{(L_{L})}\big)$$
$$\widetilde{y}_{i}^{(j+1)}=\text{Atn}_{\text{S}}^{(j+1)}\big(y_{i}^{(j)}\,\big|\,y_{\leq i}^{(j)}\big)$$
$$y_{i}^{(j+1)}=\text{FF}_{\text{S}}^{(j+1)}\big(\widetilde{y}_{i}^{(j+1)}\big)\qquad\text{for }j=0,\cdots,L_{S}-1.\tag{4}$$

The eventual output of the model is $y_{i}^{(L_{S})}$, which is its prediction of the $i^{\text{th}}$ token in the input sequence. This is pictorially depicted in Figure 3.
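To illustrate how the two stages in Equations (3) and (4) fit together, here is a small NumPy sketch of the input construction for one block. The embedding tables, dimensions, and the identity placeholder for the primary stack are illustrative assumptions, and the input indexing $x_i^{(0)} = \text{Emb}_{\text{L}}(x[i-\gamma])$ follows our reading of Equation (3).

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d_L, d_S, gamma = 50, 12, 6, 4
emb_L = rng.normal(size=(vocab, d_L))                   # Emb_L
emb_S = rng.normal(size=(vocab, d_S))                   # Emb_S
W_tandem = rng.normal(size=(d_L, d_S)) / np.sqrt(d_L)   # FF_Tandem as a projection

# Placeholder for the primary stack of Equation (3); in the real model this is
# L_L attention/feedforward layers whose attention is restricted to previous blocks.
primary_stack = lambda x0: x0

x = list(rng.integers(0, vocab, size=8))            # tokens generated so far
block_positions = range(len(x), len(x) + gamma)     # positions of the next block

# Equation (3) inputs: position i of the new block is fed the embedding of the
# token gamma positions earlier, so the whole block can be sketched in parallel.
x0 = np.stack([emb_L[x[i - gamma]] for i in block_positions])
sketch = primary_stack(x0)                           # x_i^{(L_L)}, the block "sketch"

# Equation (4) input for the small autoregressive model: previous-token
# embedding plus the projected sketch for the current position.
def small_model_input(i, prev_token):
    return emb_S[prev_token] + sketch[i - len(x)] @ W_tandem

print(small_model_input(len(x), x[-1]).shape)        # -> (6,), i.e. (d_S,)
```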
## 5.1. Experimental Results For Deep Tandem Transformers

In this section, we present preliminary experimental results on deep tandem transformers compared with the standard architecture. For this section, we consider the LaMDA models along with the training protocol described in (Thoppilan et al., 2022). In particular, we consider the 1B parameter model from the LaMDA family and construct a deep tandem version of it by splitting the 16 layers equally between ML and MS (so each of them has 8 layers), with block length γ = 2. The results, presented in Table 6, suggest that we suffer minimal loss by letting the autoregressive component be only a part of the entire model.

Table 6. Deep tandem vs. the standard (topline) 1B LaMDA model.

| | Topline | Tandem |
|---|---|---|
| Accuracy | 45.1 | 43.3 |
| CE loss | 2.71 | 2.85 |
| Speedup estimate | 1× | 1.25× |
## 5.2. Importance Of The Small Autoregressive Component

In this section we present the log perplexity achieved by a block prediction model similar to (Stern et al., 2018), where we predict the next block of γ = 8 tokens simultaneously. In other words, we directly train the output $x_{i}^{(L_L)}$ of the large model in Equation (3) to predict the $i$th token $x[i]$. The CE loss of the resulting model, and its comparison with a fully autoregressive model, is presented in Table 7. As we can see, the cross entropy loss of such a model is much higher compared to that of the original model, which is fully autoregressive.

Table 7. CE loss of a fully autoregressive model vs. 8-token block prediction.

| | Autoregressive | 8-block prediction |
|---|---|---|
| CE loss | 2.71 | 5.55 |
## 6. Conclusions And Discussion

In this work, we introduce a novel architecture, Tandem transformers, which combines a small autoregressive model with a large model operating in block mode. Tandem transformers substantially boost the small model's predictive accuracy by allowing it to attend to representations from the large model. In our experiments, a Tandem model comprising PaLM2-Bison and PaLM2-Gecko substantially improves over a standalone PaLM2-Gecko and gives performance comparable to the PaLM2-Otter model, while being 1.16× faster than PaLM2-Otter. When used within the SPEED setup as a secondary model, the distilled Tandem PaLM2-Gecko model gives around a 1.14× speedup over a distilled PaLM2-Gecko model. We further improve our Tandem model through an adaptive block length procedure in SPEED and obtain around a 1.22× speedup over using PaLM2-Gecko-Distil as the secondary model.
## Limitations And Future Directions

- **Other variants of tandem**: In our current approach, we use the large model only through its representations of the past tokens. Is it possible to use the large model to also generate a plan for the future γ *tokens*, along the lines of deep tandem transformers?
- **Alternative to LoRA for finetuning**: The current approach for finetuning a base model for multiple downstream applications is through low rank adaptation (LoRA) (Hu et al., 2021). It will be interesting to explore whether tandem with block length 0 can be an effective alternative to LoRA, while reducing the training cost substantially, since backpropagation needs to be done only for the small model.
- **Adaptive γ for larger num-samples/batch-size**: While we see promising results with adaptive γ in SPEED for num-samples = 1, extending it to larger num-samples seems challenging. Identifying an effective way of determining when to continue generating with the small model vs. verifying with the large model, in the larger num-samples setting, is an interesting direction for future work.
- **Smaller drafter models in SPEED**: Finally, we hope that tandem can enable using even smaller drafter models in SPEED compared to the ones currently being pursued, leading to both memory and latency improvements.
## 7. Broader Impact Statement

Our work provides a more computationally efficient large language model inference solution, which we hope can bring down the carbon emissions associated with LLM inference. It also enables easier deployment of LLMs, which could have societal consequences that seem difficult to predict.
## A. Additional Results In this section, we will present additional experimental results.
## A.1. Decode Time Results

In Tables 8 and 9, we compare the decode time results (i.e., end-to-end time minus the time required to process the input prefix) of our Tandem model and its Adaptive γ variant with the baselines.