| doc_id (string, len 36) | contents (string, 22–3.25k chars) | metadata (dict) |
|---|---|---|
6c1aba44-e643-423b-b99d-27e55621aa55 | # Can Separators Improve Chain-Of-Thought Prompting?
## 4.2 Results
[Table fragment (Sec. 4.2 results): \n\n\n (TripleSkip) separator, 71.7 ± …; the rest of the table row is not recoverable in this chunk.]
| {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d798ce24-b096-469d-ab47-72e11fdfdff3 | # Can Separators Improve Chain-Of-Thought Prompting?
## 4.2 Results
[Table fragment (Sec. 4.2 results): … ± 0.26, 49.3 ± 0.19, 77.4 ± …; table structure not recoverable in this chunk.] | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
37903df2-3c23-47cd-8338-388575cfd4c2 | # Can Separators Improve Chain-Of-Thought Prompting?
## 4.2 Results
[Table fragment (Sec. 4.2 results): 77.4 ± 0.16, row label "COT-SEP"; table structure not recoverable in this chunk.] | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
ed00da57-a361-4bd9-8cc9-c040f5783d39 | # Can Separators Improve Chain-Of-Thought Prompting?
## 4.2 Results
[Table fragment (Sec. 4.2 results): COT-SEP (unit: sentence), \n separator: 69.9 ± 0.26, 44.5 ± …; table structure not recoverable in this chunk.] | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
a5b6db8d-5f5a-4525-a79f-0fc8ed965d23 | # Can Separators Improve Chain-Of-Thought Prompting?
## 4.2 Results
[Table fragment (Sec. 4.2 results): … ± 0.26, 44.5 ± 0.33, 75.9 ± 0.12; table structure not recoverable in this chunk.] | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
8523f5e5-f5f3-4e7d-80da-c817c55c5b3a | # Can Separators Improve Chain-Of-Thought Prompting?
## 4.2 Results
[Table fragment (Sec. 4.2 results): \n\n separator: 70.2 ± 0.22, 44.2 ± …; table structure not recoverable in this chunk.] | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
492ab2be-fb78-4d7d-9caa-b06af5445abb | # Can Separators Improve Chain-Of-Thought Prompting?
## 4.2 Results
[Table fragment (Sec. 4.2 results): 44.2 ± 0.19, 74.5 ± 0.12, \n\n\n separator; table structure not recoverable in this chunk.] | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
89c1fb49-c327-4fe5-950d-2ba965d8ee18 | # Can Separators Improve Chain-Of-Thought Prompting?
## 4.2 Results
[Table fragment (Sec. 4.2 results): \n\n\n separator: 70.8 ± 0.17, 45.8 ± …; table structure not recoverable in this chunk.] | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
2e904c83-eea9-41aa-b7f4-71b177920ac1 | # Can Separators Improve Chain-Of-Thought Prompting?
## 4.2 Results
[Table fragment (Sec. 4.2 results): … ± 1.00, 75.2 ± 0.22; table structure not recoverable in this chunk.] | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
3229ae25-2a7e-4add-85a9-330fcab7e62b | # Can Separators Improve Chain-Of-Thought Prompting?
## 5 Conclusion
In this paper, we propose COT-SEP, a novel method that places separators in a structured format to improve Chain-of-Thought (CoT) prompting, a technique popular with large language models (LLMs). Within this framework, we place a separator at the end of each prompt exemplar to improve readability for LLMs.
We demonstrate through multiple experiments employing different separators and structural variations that COT-SEP effectively improves CoT prompting in various LLMs across highly complex arithmetic and commonsense benchmarks. While our experimental results focus only on CoT prompting scenarios, our idea of adding separators within the prompt can easily be plugged into various existing prompting techniques, such as Generated Knowledge Prompting (Liu et al., 2021), Self-Consistency (Wang et al., 2023b), Tree of Thoughts (Yao et al., 2023; Long, 2023), and GraphPrompt (Liu et al., 2023). We look forward to applying our approach to different prompting strategies in the near future. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
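The COT-SEP recipe described in the conclusion above — a separator appended after each prompt exemplar — can be sketched as a small helper. This is an illustrative sketch, not the authors' code: the function name and the exemplar wiring are our assumptions, and only the separator placement follows the paper's description.

```python
# Sketch of COT-SEP prompt construction: join few-shot (question, rationale)
# exemplars, placing the chosen separator after each exemplar, then append
# the test question. `build_cot_sep_prompt` is a hypothetical helper name.

def build_cot_sep_prompt(exemplars, question, sep="\n\n\n"):
    """Concatenate CoT exemplars with `sep` after each one (the COT-SEP
    default placement), then append the unanswered test question."""
    parts = [f"Q: {q}\nA: {a}{sep}" for q, a in exemplars]
    parts.append(f"Q: {question}\nA:")
    return "".join(parts)

exemplars = [
    ("If there are 3 cars in the parking lot and 2 more cars arrive, "
     "how many cars are in the parking lot?",
     "There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. "
     "The answer is 5."),
]
prompt = build_cot_sep_prompt(
    exemplars,
    "Olivia has $23. She bought five bagels for $3 each. "
    "How much money does she have left?",
)
print(prompt)
```

Swapping `sep` for `"###"` or `"***"` reproduces the TripleHash and TripleStar variants discussed in the paper.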
e54cc226-8ded-430b-9b48-4d236346743e | # Can Separators Improve Chain-Of-Thought Prompting?
## 6 Limitations
Our research relies on CoT prompting (Wei et al., 2022b), which is based on in-context learning. With CoT prompting, the model outputs a final answer along with a step-by-step reasoning process, allowing the generated CoT to validate the final answer. This process may lead users to trust LLMs excessively, which is critical because LLMs often generate incorrect outputs, and such trust may ultimately result in automation bias. Therefore, users should take care not to rely too heavily on LLMs for solving reasoning tasks or making decisions, even when LLMs provide helpful aid.
For our COT-SEP framework, we utilize various separators: TripleSkip (\n\n\n), TripleHash (###), TripleStar (***), <br>, and <br/>. However, as our experiments show, the results of the LLMs depend greatly on where and how the separators are placed within the prompts. Therefore, we recommend placing separators precisely in the locations specified by our framework; otherwise, the accuracy of the outcome may be diminished.
Due to limited access to computational resources, we conducted a single trial with Meta's Llama-2-7b model and three trials with OpenAI's GPT models. For the same reason, we validated our method on only a few datasets. We tested two arithmetic reasoning datasets, GSM8K (Cobbe et al., 2021) and AQuA (Ling et al., 2017), as they are the most challenging arithmetic reasoning benchmarks available and showed the least impressive performance under vanilla CoT prompting. To validate another reasoning task, we selected the CSQA (Talmor et al., 2019) benchmark, as it is one of the most difficult commonsense reasoning benchmarks. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
84ff3b4e-800e-4d74-9454-f774aa775e7b | # Can Separators Improve Chain-Of-Thought Prompting?
## Ethics Statement
We foresee no ethical concerns that may arise from the findings of this work. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
1f11f08a-3f00-43a5-92cd-5fd37e89584f | # Can Separators Improve Chain-Of-Thought Prompting?
## A Experiment Details For Difference In Separator Location In Cot-Sep
In this section, we explain the experiment shown in Table 3 in more detail. We provide Fig. 3, a more detailed version of Fig. 2 drawn with two exemplars. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
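The separator-location comparison mentioned in the Appendix A chunk above can be illustrated with a hypothetical formatter. The variant names (`after_exemplar`, `after_question`, `both`) are our own labels for the kinds of placements such an experiment could compare, not the paper's terminology, and the helper is a sketch rather than the authors' implementation.

```python
# Hypothetical illustration of separator-location variants: the same
# separator string can end the whole exemplar (the COT-SEP default),
# sit between the question and the rationale, or appear in both places.

SEP = "\n\n\n"  # TripleSkip

def format_exemplar(q, a, where="after_exemplar"):
    """Format one CoT exemplar with SEP at the requested location."""
    if where == "after_exemplar":   # COT-SEP default placement
        return f"Q: {q}\nA: {a}{SEP}"
    if where == "after_question":   # separator between Q and A
        return f"Q: {q}{SEP}A: {a}\n"
    if where == "both":
        return f"Q: {q}{SEP}A: {a}{SEP}"
    raise ValueError(f"unknown placement: {where}")
```

As the Limitations section notes, model accuracy is sensitive to this choice, so variants other than the default placement should be used with caution.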
1c35cdf3-1c02-440d-8354-ad0253b2d3e5 | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
Our prompts are primarily based on the vanilla CoT prompts provided by Wei et al. (2022b). Full prompts for our experimental setting are included in Tables 4–24.
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 -
12 = 8. The answer is 8.
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29.
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
4839d49a-f5ce-4b27-918d-1f49c69395b9 | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
= 9. The answer is 9. Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29.
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8.
Table 5: Full prompt for COT-SEP (TripleSkip) used in our experiments on the GSM8K Arithmetic Reasoning Benchmark.
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. \n \n \n Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?
A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. \n \n \n Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. \n \n \n Q: Jason had 20 lollipops | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
31e9ea90-e3f1-47fd-a470-28fce18fdc99 | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
cars are in the parking lot?
A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. \n \n \n Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. \n \n \n Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 -
12 = 8. The answer is 8.
\n \n \n Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. \n \n \n Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. \n \n \n Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. \n \n \n Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c764d70d-4603-4eaf-9ae7-fdd201adf634 | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. \n \n \n Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. \n \n \n Table 6: Full prompt for COT-SEP (TripleStar) used in our experiments on the GSM8K Arithmetic Reasoning Benchmark.
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. ***
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. ***
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. *** Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 -
12 = 8. The answer is 8. ***
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d27b9e56-c68a-48ec-a2ff-001f6fb125cd | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
answer is 39. *** Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 -
12 = 8. The answer is 8. ***
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. *** Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. *** Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. ***
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. *** Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. ###
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
5323c81a-8b96-4bf8-8ae2-81c4d680d614 | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
is 8. The answer is 8. *** Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. ###
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. ###
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. ### Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 -
12 = 8. The answer is 8. ###
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. ### Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. ### Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
5fa5f73f-fe45-44cb-91db-e931d6b02390 | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. ### Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. ###
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. ###
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. <br>
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. <br>
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. <br> Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 -
12 = 8. The answer | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d1d53455-14df-45e0-a779-a9ff8af698e5 | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. <br> Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 -
12 = 8. The answer is 8. <br>
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. <br> Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. <br> Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. <br>
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. <br> Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
f028bb9c-a097-48f9-b50f-4b97dc562ea5 | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
bagels for $3 each. How much money does she have left?
A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. <br> Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. <br/>
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?
A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. <br/>
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39.<br/> Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 -
12 = 8. The answer is 8. <br/>
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. <br/> Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. < | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0457c02f-9155-4cc9-8cbc-563bdc33effd | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. <br/> Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. <br/> Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. <br/>
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. <br/>
Table 10: Full prompt for Heterogeneous COT-SEP used in our experiments on the GSM8K Arithmetic Reasoning Benchmark.
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. \n \n \n Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?
A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. ###
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42 | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b069aa2d-5137-43c1-aa9a-374bf68b00b8 | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. \n \n \n Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?
A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. ###
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating
35, they had 74 - 35 = 39. The answer is 39. *** Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 -
12 = 8. The answer is 8. <br>
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. <br/> Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. \n \n \n Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. ###
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. *** Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a).
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2
(e) 7/2
A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means
44a / 3 = 22. So a is equal to 3/2. The answer is (b).
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b)
1392 (c) 1480 (d) 1562 (e) 1788
A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are
401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b).
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a).
\n \n \n Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2
(e) 7/2
A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means
44a / 3 = 22. So a is equal to 3/2. The answer is (b).
\n \n \n Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). \n \n \n Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b)
1392 (c) 1480 (d) 1562 (e) 1788
A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are
401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b).
\n \n \n Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). ***
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2
(e) 7/2
A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means
44a / 3 = 22. So a is equal to 3/2. The answer is (b). ***
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). *** Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b)
1392 (c) 1480 (d) 1562 (e) 1788
A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are
401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). ***
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). ###
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2
(e) 7/2
A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means
44a / 3 = 22. So a is equal to 3/2. The answer is (b). ###
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). ### Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b)
1392 (c) 1480 (d) 1562 (e) 1788
A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are
401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). ###
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). <br>
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2
(e) 7/2
A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means
44a / 3 = 22. So a is equal to 3/2. The answer is (b). <br>
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e).
<br>
Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b)
1392 (c) 1480 (d) 1562 (e) 1788
A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are
401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). <br>
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). <br/>
Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2
(e) 7/2
A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means
44a / 3 = 22. So a is equal to 3/2. The answer is (b). <br/>
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e).
<br/>
Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b)
1392 (c) 1480 (d) 1562 (e) 1788
A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are
401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). <br/>
Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a).
\n \n \n Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2
(e) 7/2
A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means
44a / 3 = 22. So a is equal to 3/2. The answer is (b). ###
Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). *** Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b)
1392 (c) 1480 (d) 1562 (e) 1788
A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are
401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). <br>
Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) inkwell (d) desk drawer (e) blotter A: The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink. So the answer is (e).
Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation
(c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is
(c).
Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). Q: Sammy wanted to go to where the people were. Where might he go? Answer Choices: (a) populated areas (b) race track (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart
(c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (c) countryside (d) atlas A: The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions. So the answer is (d). Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c).
Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) inkwell (d) desk drawer (e) blotter A: The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink. So the answer is (e). \n
\n \n Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation
(c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is
(c).
Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). \n \n \n Q: Sammy wanted to go to where the people were. Where might he go? Answer Choices: (a) populated areas (b) race track (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). \n \n \n Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart
(c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b).
\n \n
\n Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (c) countryside (d) atlas A: The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions. So the answer is (d). \n \n \n Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). \n \n \n Table 20: Full prompt for COT-SEP (TripleStar) used in our experiments on the CSQA Commonsense Reasoning Benchmark.
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b7028dc5-ad11-4b19-a831-a69d6b1cf897 | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
directions. So the answer is (d). \n \n \n Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). \n \n \n Table 20: Full prompt for COT-SEP (TripleStar) used in our experiments on the CSQA Commonsense Reasoning Benchmark.
Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) inkwell (d) desk drawer (e) blotter A: The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink. So the answer is (e). *** Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation
(c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is
(c). ***
Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). *** Q: Sammy wanted to go to where the people were. Where might he go? Answer Choices: (a) populated areas (b) race track (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). *** Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart
(c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). *** Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (c) countryside (d) atlas A: The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions. So the answer is (d). *** Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). *** Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) inkwell (d) desk drawer (e) blotter A: The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink. So the answer is (e). ###
Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation
(c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is
(c). ###
Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). ### Q: Sammy wanted to go to where the people were. Where might he go? Answer Choices: (a) populated areas (b) race track (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). ### Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart
(c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). ### Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (c) countryside (d) atlas A: The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions. So the answer is (d). ### Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). ###
Table 22: Full prompt for COT-SEP (<br>) used in our experiments on the CSQA Commonsense Reasoning Benchmark.
Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) inkwell (d) desk drawer (e) blotter A: The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink. So the answer is (e). <br> Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation
(c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is
(c). <br>
Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). <br> Q: Sammy wanted to go to where the people were. Where might he go? Answer Choices: (a) populated areas (b) race track (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). <br> Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart
(c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). <br> Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (c) countryside (d) atlas A: The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions. So the answer is (d). <br> Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). <br> Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) inkwell (d) desk drawer (e) blotter A: The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink. So the answer is (e). <br/> Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation
(c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is
(c). <br/>
Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
59f91cfa-7773-45b9-88b7-cdf0aa0fca05 | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
requires cable? Answer Choices: (a) radio shack (b) substation
(c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is
(c). <br/>
Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). <br/> Q: Sammy wanted to go to where the people were. Where might he go? Answer Choices: (a) populated areas (b) race track (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). <br/> Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart
(c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). <br/> Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (c) countryside (d) atlas A: The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions. So the answer is (d).
<br/>
Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). <br/> Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) ink | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c4d9e31e-6629-44c3-9a2c-46eb8165ad1a | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
).
<br/>
Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). <br/> Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) inkwell (d) desk drawer (e) blotter A: The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink. So the answer is (e). \n \n \n Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation
(c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is
(c). ###
Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). *** Q: Sammy wanted to go to where the people were. Where might he go? Answer Choices: (a) populated areas (b) race track (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). <br> Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart
(c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). <br/>
Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico ( | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
fae1519b-fc0a-4a88-9724-b53856cee2d2 | # Can Separators Improve Chain-Of-Thought Prompting?
## B Full List Of Prompt Exemplars
. So the answer is (a). <br> Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart
(c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). <br/>
Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (c) countryside (d) atlas A: The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions. So the answer is (d). \n \n \n Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). ### | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10645v1.md",
"file_path": "paper_data/2402.10645v1.md",
"file_size": 84926,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d53101b7-f46b-4019-99da-d4de02e4a659 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
Tianyi Qiu*§1, Fanzhi Zeng*1,2, Jiaming Ji*1, Dong Yan*3, Kaile Wang1, Jiayi Zhou1, Han Yang1, Josef Dai1, Xuehai Pan1, Yaodong Yang1† | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0082b879-0783-4fa1-a547-6986843053e9 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## Abstract
There is a trilemma in reinforcement learning from human feedback (RLHF): the incompatibility between highly diverse contexts, low labeling cost, and reliable alignment performance. Here we aim to mitigate such incompatibility through the design of dataset information structures during reward modeling. Specifically, we first reexamine the RLHF process and propose a theoretical framework portraying it as an autoencoding process over text distributions. Our framework formalizes the RLHF objective of ensuring distributional consistency between human preference and large language model (LLM) behavior. Building on this framework, we then systematically investigate the performance impact of information structure in the reward modeling stage of RLHF. To further understand reward generalization in the reward modeling stage, we introduce a new method based on random graph theory that models generalization in the semantic space. A key insight of our analysis is the superiority of the tree-based information structure in reward modeling, compared to chain-based baselines adopted by conventional RLHF methods. We derive that under highly complex contexts with limited data, the tree-based reward model (RM) induces up to
$\Theta(\log n/\log\log n)$ times less variance than chain-based RM, where $n$ is the dataset size. To validate our theoretical contribution, we demonstrate that on three different NLP tasks, the tree-based RM achieves 65% win rate on average against chain-based baselines. Looking forward, we hope our framework can serve as a step towards understanding goal misgeneralization. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b74b77e7-a707-4a3c-b4cc-69ca2654935d | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 1 Introduction
After training on massive datasets, large language models
(LLMs) have displayed remarkably general capabilities. Particularly in specific downstream tasks, these models have reached or even exceeded human expert performance (OpenAI, 2023; Yang et al., 2023a; Bai et al., 2023). However, the training process of LLMs faces several issues. One issue is that these models are trained using vast amounts of text data scraped from the internet. Such data spans various domains and specialties, often containing noise, errors, and social biases (Together Computer, 2023; Ji et al., 2023a).
Another issue is that LLMs are primarily trained to perform next-token prediction (Touvron et al., 2023), which can result in model behaviors that are unintended and potentially harmful. Therefore, it is crucial to align LLMs with human intentions and values to ensure the safety and trustworthiness of these systems (Ji et al., 2023b).
A class of existing methods align LLMs using reward models (RM), trained on human-annotated preference data to represent human preferences. The most notable method within this class, Reinforcement Learning from Human Feedback (RLHF), employs reinforcement learning (RL) to improve the model's responses as judged by the reward model, and balances model optimization with original model fidelity using KL divergence constraints (Christiano et al., 2017; Ziegler et al., 2019; Ouyang et al., 2022; Bai et al., 2022a).

[Displaced figure-caption fragment: "... determines the combinatorial structure that the pairs $(y_A, y_B)$ will follow. Thm. 5.12 and Thm. 5.13, summarized in Table 1, are the key results of this study."]

RLHF is criticized for its lack of scalability to super-human models (Casper et al., 2023; Burns et al., 2023), but even for current models, RLHF still faces a trilemma: the incompatibility between high task diversity, low labeling cost, and reliable alignment performance (Casper et al., 2023). Some methods, most notably Direct Preference Optimization
(DPO) (Rafailov et al., 2023), bypass the reward model using binary cross-entropy for simpler preference learning, and thereby reduce computation costs. Reinforcement Learning from AI Feedback (RLAIF) (Bai et al., 2022b; Lee et al., 2023) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c6c18e9f-9a27-4865-bd74-c03d9a1e606c | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 1 Introduction
Casper et al., 2023; Burns et al., 2023), but even for current models, RLHF still faces a trilemma: the incompatibility between high task diversity, low labeling cost, and reliable alignment performance (Casper et al., 2023). Some methods, most notably Direct Preference Optimization
(DPO) (Rafailov et al., 2023), bypass the reward model using binary cross-entropy for simpler preference learning, and thereby reduce computation costs. Reinforcement Learning from AI Feedback (RLAIF) (Bai et al., 2022b; Lee et al., 2023) utilizes AI annotation to reduce annotation costs while maintaining consistency with actual human preferences. These alternative approaches remain constrained by the trilemma above, but all delve into an examination of the preference dataset. That inspires us to characterize the role of the preference dataset's information structure in RLHF from a theoretical perspective, while experimentally validating the efficacy of our theoretically inspired insights.
Building upon existing literature, we make the following contributions to the fields of alignment and machine learning theory.
- We formalize RLHF as an autoencoding process (Figure 2), and prove a criterion of convergence for this process (Theorem 4.1), stating that under successful reward generalization, both the RM and the post-RLHF LLM converge upon their respective ideal human counterparts. Note that this framework is not contingent on assumptions about information structures, which allows it to be generally applicable.
- We propose the induced Bayesian network (IBN, Definition 5.3) for the characterization and analysis of generalization in reward modeling. Drawing from random graph theory and causal analysis, the IBN approach enables empirically grounded analysis of reward generalization, and can derive meaningful bounds (Table
1) without overly strong assumptions on the hypothesis space. Our methods also represent a step towards fully understanding the goal misgeneralization problem (Di Langosco et al., 2022; Shah et al., 2022) in alignment.
- We analyze the impact of information structures in
RLHF using the IBN method, and, based on this analysis, propose a novel tree-based method for reward modeling. We both formally derive (Theorem 5.12, Theorem 5.13) and experimentally demonstrate (Section 6) the superiority of the tree-based method in diverse contexts with limited data. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
75cae053-d2ca-4847-b3a4-5e64e2a71696 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 2 Related Work
RLHF and Alignment Alignment is an area of machine learning research that focuses on ensuring AI systems behave in accordance with human intentions and values (Ji et al., 2023b). RLHF (Christiano et al., 2017; Ouyang et al., 2022; Bai et al., 2022a) is an alignment algorithm that extends Preference-based Reinforcement Learning (Wirth et al., 2017) to align models with human preferences. In the present study, we focus on its application to LLMs. RLHF achieves alignment through RL algorithms that train the policy model (*i.e.,* LLMs) to maximize the cumulative reward from a reward model. Some recent methods aim to streamline RLHF by minimizing (Yuan et al., 2023; Gulcehre et al., 2023) or entirely removing (Rafailov et al., 2023) the reliance on reward models. Concurrently, other research efforts, including those by Bai et al. (2022b) and Lee et al. (2023), focus on using AIs for data annotation to reduce costs. Additionally, there is a drive to refine reward models (Wu et al., 2023), which treat different error rewards as binary classification problems.
Generalization in Alignment Di Langosco et al. (2022);
Shah et al. (2022) outline the goal misgeneralization problem in RL. Investigating the goal misgeneralization directly in LLMs is challenging, and to the best of our knowledge, there is currently limited related work in this area. Xiong et al. (2024) gives a detailed description of generalization under the strong assumption of linear reward. Under our autoencoding framework of RLHF, we introduce the IBN method to analyze reward generalization in an empirically grounded manner, thus filling a gap within the literature. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
31714785-987c-4fce-afea-771a008f928c | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 3 Preliminaries
We start by introducing the prerequisite concepts.
Large Language Models The task of LLM generation can be defined with $(\mathcal{X}, \mathcal{Y}, \mathcal{V}, p_{\text{LM}}(\cdot \mid \cdot\,; \theta_{\text{LM}}))$. We consider an LLM to be parameterized by $\theta_{\text{LM}}$ and denoted by the output distribution $p_{\text{LM}}(\cdot \mid \cdot)$. The input space (prompt space) is $\mathcal{X} \subseteq \mathcal{V}^{\leq m_{\max}}$ and the output space is $\mathcal{Y} \subseteq \mathcal{V}^{\leq m_{\max}}$, for some constant $m_{\max}$. The model takes as input a sequence $\mathbf{x} = (x_0, \cdots, x_{n-1})$, *aka* prompt, to generate the corresponding output (aka response) $\mathbf{y} = (y_0, \cdots, y_{m-1})$.
$x_i$ and $y_j$ represent individual tokens from a predetermined vocabulary $\mathcal{V}$.
The autoregressive language model $p_{\text{LM}}$ sequentially generates tokens for a given position by relying solely on the sequence of tokens it has previously generated. Consequently, this model can be conceptualized as a Markov decision process, wherein the conditional probability $p_{\text{LM}}(\textbf{y}\mid\textbf{x})$ can be defined through a decomposition as follows.
$$p_{\text{LM}}(y_{0..m-1}\mid\mathbf{x})=\prod_{0\leq k<m}p_{\text{LM}}(y_{k}\mid\mathbf{x},y_{0..k-1})$$
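The chain-rule decomposition above can be sketched numerically. The toy next-token model below (a uniform distribution over a three-token vocabulary) is an illustrative assumption standing in for $p_{\text{LM}}$, not an actual LLM:

```python
import math

# Toy vocabulary and next-token distribution; a real model would
# condition on the prefix, here we use a uniform stand-in.
VOCAB = ["a", "b", "<eos>"]

def p_next(token, prefix):
    return 1.0 / len(VOCAB)

def sequence_logprob(y, x):
    """log p_LM(y | x) = sum over 0 <= k < m of log p_LM(y_k | x, y_{0..k-1})."""
    total = 0.0
    for k in range(len(y)):
        total += math.log(p_next(y[k], list(x) + list(y[:k])))
    return total

# A 2-token response under the uniform toy model has probability (1/3)^2.
lp = sequence_logprob(["a", "b"], ["prompt"])
```

Under the uniform stand-in, each factor contributes $1/3$, so the sequence probability is $(1/3)^m$ for an $m$-token response.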
The RLHF Pipeline Using the notations above, we review the RLHF pipeline from Ziegler et al. (2019); Ouyang et al.
(2022). It typically consists of three stages.
- *Supervised Fine-tuning (SFT).* RLHF begins with a pretrained language model, which is then fine-tuned via supervised learning, especially using maximum likelihood estimation, on a high-quality human instruction dataset designed for downstream tasks. This process
results in a model πSFT(Β· | Β· ; πSFT).
- Collect | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
6608281b-19ec-4970-9779-2d992c4b6414 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 3 Preliminaries
HF Pipeline Using the notations above, we review the RLHF pipeline from Ziegler et al. (2019); Ouyang et al.
(2022). It typically consists of three stages.
- *Supervised Fine-tuning (SFT).* RLHF begins with a pretrained language model, which is then fine-tuned via supervised learning, especially using maximum likelihood estimation, on a high-quality human instruction dataset designed for downstream tasks. This process
results in a model πSFT(Β· | Β· ; πSFT).
- *Collecting Comparison Data and Reward Modeling.* This phase involves the collection of comparison data, essential for training the RM $r_{\rm RM}(\cdot\mid\cdot)$. The process starts with the model $p_{\rm SFT}(\mathbf{y}\mid\mathbf{x})$, which generates response pairs $(\mathbf{y}_1, \mathbf{y}_2)$ from given prompts $\mathbf{x}$. Human annotators are then tasked with selecting their preferred response from each pair, denoted as $\mathbf{y}_w \succ \mathbf{y}_l \mid \mathbf{x}$, where $\mathbf{y}_w$ and $\mathbf{y}_l$ denote the preferred and dispreferred answers amongst $(\mathbf{y}_1, \mathbf{y}_2)$.
- *Policy Optimization via RL.* The final step is optimizing the LLM via RL, guided by the reward model
πRM(Β·|Β·). The process of LLMs generating responses from prompts is modeled as a bandit setting (Ouyang et al., 2022), where a reward is obtained from the reward model πRM(Β·|Β·) at the end of each response. The primary objective of RL is to adjust the parameters πLM
of the LLM so that the expected reward on the training prompt distribution PX is maximized. That is,
$\theta_{\rm LM}=\arg\max_{\theta}\ \mathrm{E}_{x\sim\mathcal{P}_{X},\,y\sim p_{\rm LM}(\cdot\mid x;\,\theta)}\left[r_{\rm RM}(y\mid x)\right]$ | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
76cd4de7-b6f8-4ebe-a0de-53a42c8d0330 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 3 Preliminaries
2022), where a reward is obtained from the reward model πRM(Β·|Β·) at the end of each response. The primary objective of RL is to adjust the parameters πLM
of the LLM so that the expected reward on the training prompt distribution PX is maximized. That is,
$$\theta_{\rm LM}=\arg\max_{\theta}\ \mathrm{E}_{x\sim\mathcal{P}_{X},\,y\sim p_{\rm LM}(\cdot\mid x;\,\theta)}\left[r_{\rm RM}(y\mid x)\right]$$
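The bandit-style objective above can be sketched as a Monte-Carlo estimate. All names below (`sample_prompt`, `sample_response`, `reward`) are illustrative placeholders, not the paper's actual prompt distribution, policy, or reward model:

```python
import random

def sample_prompt():
    # Stand-in for x ~ P_X.
    return random.choice(["q1", "q2"])

def sample_response(x):
    # Stand-in policy p_LM(.|x; theta): uniform over two canned responses.
    return random.choice(["good", "bad"])

def reward(y, x):
    # Stand-in reward model: a single scalar at the end of each response.
    return 1.0 if y == "good" else 0.0

def estimate_objective(n=20_000):
    """Monte-Carlo estimate of E_{x~P_X, y~p_LM(.|x)}[r_RM(y|x)]."""
    total = 0.0
    for _ in range(n):
        x = sample_prompt()
        y = sample_response(x)
        total += reward(y, x)
    return total / n

random.seed(0)
est = estimate_objective()  # concentrates near 0.5 under the uniform stand-in policy
```

RL then adjusts the policy's parameters so that this expectation grows, rather than merely estimating it.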
Chain-based and Tree-based Information Structures In the reward modeling stage of RLHF, we define information structures to be the structures of the information flow that generates the RM πRM(Β·) from the idealized human text distribution πH(Β·) (Section 4). Concretely speaking, in the present study, we focus on the combinatorial structure of the human preference dataset, as a key aspect of the more broadly-defined information structure. Given a prompt π₯, the generation process of the chain-based preference dataset involves independently sampling pairs of responses for comparison to form the human preference dataset. On the other hand, the generation process of the tree-based preference dataset involves sampling a complete tree of responses to prompt π₯, where each node contains only one sentence and each non-leaf node has the same number of child nodes. The tree-based preference dataset is created by randomly selecting any two complete responses from the root node to some leaf node, and then using the response pair for comparison. Figure 4 gives an illustration of the two processes. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
64414432-d47c-4d20-a222-c527149b29a8 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 4 Formulating The Rlhf Process
Due to our focus on the combinatorial structure of preference data as opposed to the distribution of prompts, we will offer a formulation for RLHF in the context of any fixed prompt π₯ β X for simplicity. This approach can be seamlessly adapted to accommodate scenarios with varying prompts.
We consider the RLHF pipeline to consist of the following key elements in their order of appearance. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
2c4fdd36-635f-47f5-82cb-731542e690eb | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## Idealized Human Text Distribution $\pi_{\rm H} : \mathcal{Y} \to \mathbb{R}_{\geq 0}$ ¹

It represents the probabilities of every possible response from an idealized human being whose behavior is in perfect alignment with collective human preferences. Note that the question of how we can determine this distribution is not within the scope of this paper, since our analysis does not rely on the specifics of this distribution. Based on a straightforward generalization of the Bradley-Terry model (Bradley & Terry, 1952), we can further define the *idealized human reward function* $r_{\rm H} : \mathcal{Y} \to \mathbb{R}$ satisfying

$$\pi_{\rm H}(y_{0})=\frac{\exp\left(\beta r_{\rm H}(y_{0})\right)}{\sum_{y\in\mathcal{Y}}\exp\left(\beta r_{\rm H}(y)\right)}.$$
Human Preference Dataset $D=\left\{\left(y^{\rm A}_{D,i},\,y^{\rm B}_{D,i},\,\delta_{D,i}\right)\right\}_{i=1}^{|D|}$.

Here, all $y^{\rm A}_{D,i}, y^{\rm B}_{D,i}$ are elements of $\mathcal{Y}$ drawn in specific ways (depending on the information structure used, which we will specify in Section 5),² and given $y^{\rm A}_{D,i}, y^{\rm B}_{D,i}$, we have

$$\delta_{D,i}\sim\mathrm{Logistic}\left(\frac{1}{\beta}\log\frac{\pi_{\rm H}(y^{\rm A}_{D,i})}{\pi_{\rm H}(y^{\rm B}_{D,i})},\,\frac{1}{\beta}\right)=\mathrm{Logistic}\left(r_{\rm H}(y^{\rm A}_{D,i})-r_{\rm H}(y^{\rm B}_{D,i}),\,\frac{1}{\beta}\right)$$

where $\mathrm{Logistic}(\mu, s)$ stands for a logistic distribution with mean $\mu$ and scale $s$, and the random variable $\delta_{D,i}$ stands for the score difference between $y^{\rm A}_{D,i}$ and $y^{\rm B}_{D,i}$ as estimated | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
734802dc-b6a5-47e9-b11b-6bbdbfbef2a6 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## Idealized Human Text Distribution $\pi_{\rm H} : \mathcal{Y} \to \mathbb{R}_{\geq 0}$ ¹

$$\delta_{D,i}\sim\mathrm{Logistic}\left(r_{\rm H}(y^{\rm A}_{D,i})-r_{\rm H}(y^{\rm B}_{D,i}),\,\frac{1}{\beta}\right)$$

where $\mathrm{Logistic}(\mu, s)$ stands for a logistic distribution with mean $\mu$ and scale $s$, and the random variable $\delta_{D,i}$ stands for the score difference between $y^{\rm A}_{D,i}$ and $y^{\rm B}_{D,i}$ as estimated by a human evaluator. The randomness here is due to the widespread presence of noise in human evaluation data.

The fact that $\delta_{D,i}$ follows such a logistic distribution is, again, a corollary of the Bradley-Terry model (Bradley & Terry, 1952), since it is the only distribution that satisfies
$$\mathrm{P}\left[\delta_{D,i}>0\right]=\frac{\exp\left(\beta r_{\mathrm{H}}(y_{D,i}^{\mathrm{A}})\right)}{\exp\left(\beta r_{\mathrm{H}}(y_{D,i}^{\mathrm{A}})\right)+\exp\left(\beta r_{\mathrm{H}}(y_{D,i}^{\mathrm{B}})\right)}$$
regardless of the values that $r_{\mathrm{H}}(y_{D,i}^{\mathrm{A}}),r_{\mathrm{H}}(y_{D,i}^{\mathrm{B}})$ take.
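This corollary can be checked numerically: drawing $\delta \sim \mathrm{Logistic}(r_{\rm H}(y^{\rm A}) - r_{\rm H}(y^{\rm B}),\, 1/\beta)$ reproduces the Bradley-Terry preference probability. The reward values and $\beta$ below are arbitrary illustrative choices:

```python
import math
import random

def bt_prob(r_a, r_b, beta):
    """Bradley-Terry probability that A is preferred over B."""
    return math.exp(beta * r_a) / (math.exp(beta * r_a) + math.exp(beta * r_b))

def sample_delta(r_a, r_b, beta):
    """Inverse-CDF sample from Logistic(mean = r_a - r_b, scale = 1/beta)."""
    u = random.random()
    return (r_a - r_b) + (1.0 / beta) * math.log(u / (1.0 - u))

random.seed(0)
r_a, r_b, beta = 1.0, 0.3, 2.0
n = 100_000
frac = sum(sample_delta(r_a, r_b, beta) > 0 for _ in range(n)) / n
# frac approximates bt_prob(r_a, r_b, beta) = sigmoid(beta * (r_a - r_b))
```

The empirical fraction of positive score differences matches $\sigma(\beta(r_{\rm A} - r_{\rm B}))$ up to Monte-Carlo noise, as the corollary requires.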
In practice, the strength of human preference is usually collected as discrete integer values or even binary labels,

¹By default, we will represent a probability distribution with its probability density function (PDF) or probability mass function (PMF), and will denote with $\Delta[S]$ the space of all PDFs or PMFs over $S$ (*i.e.*, all distributions over $S$), depending on whether $S$ is a set of discrete elements or not.

²Below, we will not distinguish between $y^{*}_{D,i}$ | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e2a01806-80ac-4b8f-9c44-a6a0b5f778d8 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## Idealized Human Text Distribution $\pi_{\rm H} : \mathcal{Y} \to \mathbb{R}_{\geq 0}$ ¹

take. In practice, the strength of human preference is usually collected as discrete integer values or even binary labels, which can be seen as discretized $\delta_{D,i}$. In any given case, the finer-grained this discretization process is, the more applicable our model will be.

¹By default, we will represent a probability distribution with its probability density function (PDF) or probability mass function (PMF), and will denote with $\Delta[S]$ the space of all PDFs or PMFs over $S$ (*i.e.*, all distributions over $S$), depending on whether $S$ is a set of discrete elements or not.

²Below, we will not distinguish between $y^{*}_{D,i}$ as elements of $\mathcal{Y}$ and as random variables taking values in $\mathcal{Y}$. The meaning should be clear from the context. We will also adopt this convention for other similar variables.
Reward Model $r_{\rm RM}(\cdot)$. The reward model can be seen as a finite-sample estimator of $r_{\rm H}$ based on $D$. It is a function-valued random variable that takes values in $\mathbb{R}^{\mathcal{Y}}$ and depends on $D$. It follows the distribution $P_{r_{\rm RM}}\in\Delta\left[\mathbb{R}^{\mathcal{Y}}\right]$. We can equivalently view $r_{\rm RM}(\cdot)$ as a mapping that maps every $y\in\mathcal{Y}$ to a real-valued random variable, and $P_{r_{\rm RM}}$ as the joint distribution of those random variables.
One could obtain $r_{\rm RM}$ using Bayesian inference on $r_{\rm H}$,³ starting from

$$p_{r_{\rm H}(y^{\rm A}_{D,i})\mid u_{0},\,\delta_{D,i}=d_{0}}(v_{0})=\frac{p_{r_{\rm H}(y^{\rm A}_{D,i})\mid u_{0}}(v_{0})\cdot p_{\delta_{D,i}\mid u_{0},v_{0}}(d_{0})}{\int_{\mathbb{R}} p_{r_{\rm H}(y^{\rm A}_{D,i})\mid u_{0}}(v)\cdot p_{\delta_{D,i}\mid u_{0},v}(d_{0})\,\mathrm{d}v}$$ | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
3bd87643-89ff-471c-8c99-ea59db39dc9c | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## Idealized Human Text Distribution $\pi_{\rm H} : \mathcal{Y} \to \mathbb{R}_{\geq 0}$ ¹
$$p_{r_{\rm H}(y^{\rm A}_{D,i})\mid u_{0},\,\delta_{D,i}=d_{0}}(v_{0})=\frac{p_{r_{\rm H}(y^{\rm A}_{D,i})\mid u_{0}}(v_{0})\cdot p_{\delta_{D,i}\mid u_{0},v_{0}}(d_{0})}{\int_{\mathbb{R}} p_{r_{\rm H}(y^{\rm A}_{D,i})\mid u_{0}}(v)\cdot p_{\delta_{D,i}\mid u_{0},v}(d_{0})\,\mathrm{d}v}=\frac{p_{\delta_{D,i}\mid u_{0},v_{0}}(d_{0})}{\int_{\mathbb{R}} p_{\delta_{D,i}\mid u_{0},v}(d_{0})\,\mathrm{d}v}=\frac{\beta\exp\left(\beta(v_{0}-u_{0}-d_{0})\right)}{\left[1+\exp\left(\beta(v_{0}-u_{0}-d_{0})\right)\right]^{2}}$$

assuming a uniform prior $p_{r_{\rm H}(y^{\rm A}_{D,i})\mid r_{\rm H}(y^{\rm B}_{D,i})=u_{0}}(\cdot)$.⁴
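As a numeric sanity check of this closed form (under the stated uniform-prior assumption; the parameter values and test points below are arbitrary), the density $\beta e^{\beta(v_0-u_0-d_0)}/[1+e^{\beta(v_0-u_0-d_0)}]^2$ coincides with the $\mathrm{Logistic}(u_0+d_0,\,1/\beta)$ pdf:

```python
import math

def posterior_density(v0, u0, d0, beta):
    """Closed-form posterior density from the Bayes computation above."""
    z = beta * (v0 - u0 - d0)
    return beta * math.exp(z) / (1.0 + math.exp(z)) ** 2

def logistic_pdf(x, mu, s):
    """Standard logistic density with mean mu and scale s."""
    z = (x - mu) / s
    return math.exp(-z) / (s * (1.0 + math.exp(-z)) ** 2)

u0, d0, beta = 0.5, 1.2, 2.0
# The posterior over r_H(y^A) should be Logistic(u0 + d0, 1/beta):
checks = [
    abs(posterior_density(v, u0, d0, beta) - logistic_pdf(v, u0 + d0, 1.0 / beta))
    for v in (-1.0, 0.0, 1.7, 3.0)
]
# all differences are at machine precision
```

The agreement follows from the identity $e^{z}/(1+e^{z})^2 = e^{-z}/(1+e^{-z})^2$, which makes the two expressions equal term by term.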
Therefore, we have obtained the posterior distribution after observing one single sample $(y^{\rm A}_{D,i},\, y^{\rm B}_{D,i},\, \delta_{D,i})$:

$$r_{\rm H}(y_{D,i}^{\rm A})\mid r_{\rm H}(y_{D,i}^{\rm B}),\delta_{D,i}\ \sim\ {\rm Logistic}\left(r_{\rm H}(y_{D,i}^{\rm B})+\delta_{D,i},\frac{1}{\beta}\right)\tag{1}$$ | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
47d6c7b5-8e13-4506-9d30-f80d6066ae7b | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## Idealized Human Text Distribution $\pi_{\rm H} : \mathcal{Y} \to \mathbb{R}_{\geq 0}$ ¹
posterior distribution after observing one single sample $(y^{\rm A}_{D,i},\, y^{\rm B}_{D,i},\, \delta_{D,i})$:
$$r_{\rm H}(y_{D,i}^{\rm A})\mid r_{\rm H}(y_{D,i}^{\rm B}),\delta_{D,i}$$ $$\sim\ {\rm Logistic}\left(r_{\rm H}(y_{D,i}^{\rm B})+\delta_{D,i},\frac{1}{\beta}\right)\tag{1}$$
Note that this relationship is not sufficient for constructing the entire function $r_{\rm RM}$, since the inference above is only at the level of response pairs, while a full-fledged inference process should work at the model level, taking into account the interdependence between different $\left(r_{\rm H}(y^{\rm A}_{D,i}),\,r_{\rm H}(y^{\rm B}_{D,i})\right)$ pairs. We will take this step in Section 5.
Language Model $p_{\rm LM}(\cdot)$. The language model is RLHF-tuned from the post-SFT model based on rewards from $r_{\rm RM}$. We characterize it as a function-valued random variable that takes values in $\Delta[\mathcal{Y}]$ and depends on $r_{\rm RM}$. We can equivalently view $p_{\rm LM}(\cdot)$ as a mapping that maps every $y\in\mathcal{Y}$ to a real-valued random variable $p_{\rm LM}(y)$,⁵ and it holds that $\sum_{y} p_{\rm LM}(y)\equiv 1$.
Figure 2 gives a visualization of the full framework. We consider the process $\pi_{\rm H}(\cdot)\to r_{\rm H}(\cdot)\to p_{\delta\mid y^{\rm A},y^{\rm B}}(\cdot)$ to be

³When writing conditional probabilities, we may abbreviate the condition $r_{\rm H}(y^{\rm B}_{D,i})=u_0$ with $u_0$, and likewise for | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
1ff405ee-bc30-4a21-b613-209ac2b740b3 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## Idealized Human Text Distribution $\pi_{\rm H} : \mathcal{Y} \to \mathbb{R}_{\geq 0}$ ¹
πLM(π¦) β‘ 1.
Figure 2 gives a visualization of the full framework. We consider the process $\pi_{\rm H}(\cdot)\to r_{\rm H}(\cdot)\to p_{\delta\mid y^{\rm A},y^{\rm B}}(\cdot)$ to be

³When writing conditional probabilities, we may abbreviate the condition $r_{\rm H}(y^{\rm B}_{D,i})=u_0$ with $u_0$, and likewise for $r_{\rm H}(y^{\rm A}_{D,i})=v_0$ and $\delta_{D,i}=d_0$.

⁴To be exact, here $p_{r_{\rm H}(y^{\rm A}_{D,i})\mid r_{\rm H}(y^{\rm B}_{D,i})=u_0}(\cdot)$ is uniform on $[-L, L]$ for a large $L\in\mathbb{R}_{+}$, and the derivation above concerns the limit at $L\to+\infty$.

⁵These random variables are not mutually independent.
inherent in the generation of human preference data. Our learning process $D = \{(y^{\rm A}, y^{\rm B}, \delta)\} \to r_{\rm RM}(y) \to p_{\rm LM}(y)$, on the other hand, is a mirror image of the preference generation process. $r_{\rm RM}(\cdot)$ can be seen as a finite-sample Bayes estimator of $r_H(\cdot)$, while $p_{\rm LM}(\cdot)$ can be viewed as an approximation of $p_H(\cdot)$. We demonstrate this correspondence with the following convergence theorem.

Theorem 4.1. *If the reward modeling process (i.e., the encoding process) satisfies that*
$$\lim_{|D|\to+\infty}\ \sup_{y_1,y_2\in\mathcal{Y}}\operatorname{Var}\left[r_{\rm RM}(y_{1})\mid r_{\rm RM}(y_{2})\right]=0$$

*and the policy optimization process (i.e., the decoding process) performs $\beta$-entropy-regularized RL, or, in other words,*

$$\operatorname{E}_{y\sim p_{\rm LM}}\left[r_{\rm RM}(y)\right]+\beta\operatorname{H}_{y\sim p_{\rm LM}}\left[y\right]=\sup_{p_{\rm LM}^{\prime}\in\Delta[\mathcal{Y}]}\left(\operatorname{E}_{y\sim p_{\rm LM}^{\prime}}\left[r_{\rm RM}(y)\right]+\beta\operatorname{H}_{y\sim p_{\rm LM}^{\prime}}\left[y\right]\right)\tag{2}$$

*then, when the dataset size $|D| \to +\infty$,*

$$r_{\rm RM}(y_{1})-r_{\rm RM}(y_{2})\xrightarrow{P}r_{\rm H}(y_{1})-r_{\rm H}(y_{2})$$

$$p_{\rm LM}(y)\xrightarrow{d}p_{\rm H}(y)$$

*uniformly for all $(y_{1},y_{2})\in\mathcal{Y}^{2}$ and for all $y\in\mathcal{Y}$.*
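Equation (2) has a well-known closed-form maximizer: the softmax policy $p_{\rm LM}(y)\propto\exp\left(r_{\rm RM}(y)/\beta\right)$. Below is a minimal numerical sanity check of this fact; the reward values and $\beta$ are illustrative assumptions of our own choosing:

```python
import math
import random

def objective(p, r, beta):
    # E_{y~p}[r(y)] + beta * H_{y~p}[y], the regularized objective of (2)
    expected_reward = sum(pi * ri for pi, ri in zip(p, r))
    entropy = sum(-pi * math.log(pi) for pi in p if pi > 0)
    return expected_reward + beta * entropy

def softmax_policy(r, beta):
    # Closed-form maximizer: p(y) proportional to exp(r(y) / beta)
    weights = [math.exp(ri / beta) for ri in r]
    z = sum(weights)
    return [w / z for w in weights]

r = [1.0, 0.2, -0.5, 2.3]   # toy reward values (assumption)
beta = 0.7
p_star = softmax_policy(r, beta)
best = objective(p_star, r, beta)

# No randomly drawn policy on the simplex should beat the softmax policy.
rng = random.Random(0)
for _ in range(1000):
    q = [rng.random() for _ in r]
    total = sum(q)
    q = [qi / total for qi in q]
    assert objective(q, r, beta) <= best + 1e-9
```

Smaller $\beta$ concentrates $p_{\rm LM}$ on the reward maximizer; larger $\beta$ pushes it toward uniform, which is what makes the decoding step's error propagation analyzable.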
*Proof Sketch.* The convergence-in-probability of $r_{\rm RM}$ can be proven using the independence between $r_{\rm RM}(y_2)$ and $r_{\rm RM}(y_1) - r_{\rm RM}(y_2)$ (Lemma A.10) and then applying tail inequalities. See Proposition A.23 for a more detailed proof. The convergence-in-distribution of $p_{\rm LM}$ can be proven by deriving the solution for (2) and then analyzing error propagation. See Proposition A.24 for a more detailed proof. $\Box$
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
38d179c3-6d3b-4418-9ce5-b4ff34f5dcd4 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 5 Analysis Of Information Structures In Reward Modeling
In this section, we continue to work within the framework proposed in Section 4, and zoom in on the encoding stage with a focus on information structures. For simplicity of notation, we will use $R^D_y$ as an abbreviation for the random variable $r_{\rm RM}(y)$ under the human preference dataset $D$.

We provide a formal model of information structure and its impact on reward modeling. Using this model, we go on to analyze chain-based and tree-based information structures as case studies. Due to space constraints, we will selectively present key definitions, assumptions, and theorems. Please refer to Appendix A for the complete derivations.
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
21084da9-d659-4752-8d1a-524c18b57d9f | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 5.1 The IBN Formulation
We start by giving a model of inductive biases in a pretrained language model, since such a model serves as the starting point of reward model training. This will allow us to provide more realistic bounds on the generalization error of the reward model training process.

Definition 5.1 (Hypothesis Distribution). Given response set $\mathcal{Y}$, the hypothesis distribution $\mathcal{P}_{\rm Hypothesis}$ is a probability distribution over the space $\mathbb{R}^{\mathcal{Y}}$. Here, $\mathcal{P}_{\rm Hypothesis}$ stands for the distribution of the reward functions that can be obtained by finetuning the pretrained language model.

Definition 5.2 (Inductive Bias Edge Set). Given response set $\mathcal{Y}$ and hypothesis distribution $\mathcal{P}_{\rm Hypothesis}(\cdot)$, the inductive bias edge set $E_{\rm IB}$ is defined as follows:

$$\left(\operatorname{edge}\left(y_i, y_j, \delta_{i,j}\right)\in E_{\rm IB}\right)\iff\left(I_{h\sim\mathcal{P}_{\rm Hypothesis}}\left[h(y_i), h(y_j)\right]>C\right)$$

for $\forall y_i, y_j,\ i\neq j,\ i,j\in\{1,2,\ldots,|\mathcal{Y}|\}$. $C$ is a constant that provides a lower bound on the mutual information of any edge in $E_{\rm IB}$ over the distribution $\mathcal{P}_{\rm Hypothesis}$.

We define the inductive bias edge set $E_{\rm IB}$ to characterize the a priori correlations between elements in $\mathcal{Y}$ before obtaining human rewards. The relevance may stem from factors such as semantic similarity among elements in $\mathcal{Y}$, since a pretrained language model possesses internal representations of semantic features.
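For a concrete (and purely hypothetical) reading of Definition 5.2: if the rewards $h(y_i)$ under $\mathcal{P}_{\rm Hypothesis}$ were jointly Gaussian, the mutual information between two coordinates would be $-\frac{1}{2}\log(1-\rho^2)$ in their correlation $\rho$, and $E_{\rm IB}$ would be obtained by thresholding it at $C$. The Gaussian form and the correlation values below are our illustrative assumptions, not part of the paper's definition:

```python
import math

def gaussian_mi(rho):
    # Mutual information of two jointly Gaussian variables with
    # correlation rho: I = -0.5 * log(1 - rho^2).
    return -0.5 * math.log(1.0 - rho ** 2)

def inductive_bias_edges(corr, C):
    # corr[i][j]: a priori correlation between h(y_i) and h(y_j),
    # e.g. derived from semantic similarity of the responses.
    n = len(corr)
    return {
        (i, j)
        for i in range(n)
        for j in range(i + 1, n)
        if gaussian_mi(corr[i][j]) > C
    }

# Toy correlation matrix over 4 responses (assumed values):
# two tight semantic clusters {y0, y1} and {y2, y3}.
corr = [
    [1.0, 0.9, 0.1, 0.0],
    [0.9, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.8],
    [0.0, 0.1, 0.8, 1.0],
]
E_IB = inductive_bias_edges(corr, C=0.5)   # {(0, 1), (2, 3)}
```

Only the strongly correlated (semantically close) pairs clear the threshold, so $E_{\rm IB}$ connects responses within a cluster, matching the intuition behind the definition.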
Definition 5.3 (Induced Bayesian Network). Given response set $\mathcal{Y}$ and any human preference dataset $D=\left\{(y^{\rm A}_{D,i}, y^{\rm B}_{D,i}, \delta_{D,i})\right\}_{i=1}^{|D|}$, we define $D$'s induced Bayesian network (IBN) $G^D(\mathcal{Y}, E^D)$ as a graph with vertex set $\mathcal{Y}$ and edge set $E^D=E_{\rm IB}\cup E^D_{\rm HP}$. The human preference edge set $E^D_{\rm HP}$ is defined as

$$E^D_{\rm HP}=\left\{(u^D_k, v^D_k, e^D_k): k=1\ldots 2|D|\right\}$$

where the $k$-th edge connects $u^D_k$ with $v^D_k$ and contains information $e^D_k$. Here,

$$(u^D_k, v^D_k)=\begin{cases}\left(y^{\rm A}_{D,i},\, y^{\rm B}_{D,i}\right)&\text{if }k=2i-1\\ \left(y^{\rm B}_{D,i},\, y^{\rm A}_{D,i}\right)&\text{if }k=2i\end{cases}$$

and $e^D_k(\cdot|\cdot)=p_{R^D_{v^D_k}\,|\,R^D_{u^D_k}}(\cdot|\cdot)$
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
1dbcb1e4-ac5e-4735-b128-f9aa06d0847c | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 5.1 The Ibn Formulation
π¦B π·,π οΏ½ if π = 2π β 1 οΏ½ π¦B π·,π, π¦A π·,π οΏ½ if π = 2π  and π π· π (Β·|Β·) = ππ
π· π£π· π |π
π· π’π· π (Β·|Β·)
is a conditional distribution determined by $\delta_{D,\lceil k/2\rceil}$.
Here, specifying the conditional distributions instead of joint distributions avoids issues caused by the shift-invariance of reward scores. In the induced Bayesian network that we define, the edges between any two points are bidirectional. Therefore, in the subsequent sections, we generally consider the induced Bayesian network as an undirected graph without loss of generality.
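Mechanically, Definition 5.3 turns each preference sample into two human-preference edges laid on top of the inductive-bias edges. A small sketch of assembling the IBN edge list; the dict-based edge annotation and the sign convention on the reverse edge are our own choices, not the paper's notation:

```python
def build_ibn_edges(dataset, e_ib):
    """Assemble the IBN edge set E^D = E_IB ∪ E^D_HP (sketch).

    dataset: list of (y_a, y_b, delta) human preference samples;
    e_ib:    iterable of (y_i, y_j) inductive-bias edges.
    Each sample (y^A_{D,i}, y^B_{D,i}, delta_{D,i}) contributes the
    two HP edges k = 2i-1 and k = 2i of Definition 5.3.
    """
    edges = [(u, v, {"kind": "IB"}) for (u, v) in e_ib]
    for y_a, y_b, delta in dataset:
        edges.append((y_a, y_b, {"kind": "HP", "delta": delta}))
        # Reverse edge; flipping the sign of delta is our convention
        # for the conditional distribution in the opposite direction.
        edges.append((y_b, y_a, {"kind": "HP", "delta": -delta}))
    return edges

D = [("y1", "y2", 0.8), ("y2", "y3", -0.3)]   # toy dataset (assumed)
E = build_ibn_edges(D, e_ib=[("y1", "y3")])   # |E| = |E_IB| + 2|D| = 5
```

Since every HP edge comes with its reverse, treating the resulting graph as undirected, as the text does, loses nothing.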
Assumption 5.4 (Information of an Edge Induces a Logistic Distribution). Given any dataset $D$ and induced Bayesian network $G^D(\mathcal{Y}, E^D)$, we assume that whether the edge from $y_1$ to $y_2$ belongs to $E_{\rm IB}$ or $E^D_{\rm HP}$, the information $e^D=p_{R^D_{y_2}\,|\,R^D_{y_1}}(\cdot|\cdot)$ is the probability density function of a logistic distribution, which means

$$R^D_{y_2}\,\Big|\,R^D_{y_1}=r\ \sim\ \begin{cases}\operatorname{Logistic}\left(r,\ \dfrac{1}{\beta_{(y_1,y_2)}}\right)&\text{if }(y_1,y_2)\in E_{\rm IB}\\[4pt] \operatorname{Logistic}\left(r+\delta,\ \dfrac{1}{\beta_{\rm HP}}\right)&\text{if }(y_1,y_2)\in E^D_{\rm HP}\end{cases}$$
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d0171d96-1d17-473c-871a-6adc295eb444 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 5.1 The Ibn Formulation
οΏ½οΏ½(π¦1,π¦2) οΏ½ if (π¦1, π¦2) β πΈIB Logistic οΏ½ π, 1 π½HP π
π· π¦2 | π
π· π¦1 = π ⼠ Logistic οΏ½ π + πΏ, 1 οΏ½ if (π¦1, π¦2) β πΈπ· HP 
where $\beta_{(y_1,y_2)}$ is a constant related to $(y_1,y_2)$, $\beta_{\rm HP}$ is a constant related to $E^D_{\rm HP}$, and $\delta$ is related to $(y_1,y_2)$, representing the human preference between $y_1$ and $y_2$. Here we assume that human preferences exhibit a certain degree of stability, which means that for any $(y_1,y_2)\in E^D_{\rm HP}$, $\beta_{\rm HP}$ has upper and lower bounds. Since our analysis only concerns the asymptotic order of statistical quantities, we can assume without loss of generality that for any $(y_1,y_2)\in E^D_{\rm HP}$, the constant $\beta_{\rm HP}$ is independent of $E^D_{\rm HP}$.

Note that the claim of $R^D_{y_2}\,|\,R^D_{y_1}=r$ following a logistic distribution when $(y_1,y_2)\in E^D_{\rm HP}$ is provided with support in (1) as a corollary of the Bradley-Terry model (Bradley & Terry, 1952).
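The Bradley-Terry connection can be checked directly: if $R^D_{y_2}\mid R^D_{y_1}=r\sim\operatorname{Logistic}(r+\delta,\,1/\beta_{\rm HP})$, then the probability that $y_2$ receives the higher reward is the sigmoid $\sigma(\beta_{\rm HP}\,\delta)$, i.e., logistic in the preference strength. A Monte-Carlo sanity check with toy parameters (all numeric values below are assumptions):

```python
import math
import random

def sample_logistic(mu, s, rng):
    # Inverse-CDF sampling: F^{-1}(u) = mu + s * log(u / (1 - u)).
    u = rng.random()
    return mu + s * math.log(u / (1.0 - u))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

rng = random.Random(42)
r, delta, beta_hp = 0.0, 0.6, 1.5   # toy values (assumption)
n = 200_000

# Draw R_{y2} | R_{y1} = r  ~  Logistic(r + delta, 1 / beta_hp)
wins = sum(sample_logistic(r + delta, 1.0 / beta_hp, rng) > r
           for _ in range(n))
empirical = wins / n
analytic = sigmoid(beta_hp * delta)   # Bradley-Terry choice probability
```

The empirical win rate of $y_2$ matches $\sigma(\beta_{\rm HP}\delta)\approx 0.711$ here, which is exactly the Bradley-Terry form referenced in (1).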
Definition 5.5 (Inference Path). Given any dataset $D$ and $y_1 \in \mathcal{Y}$, $y_2 \in \mathcal{Y}$, we call a sequence of edges $S = \{(s_i, t_i, e_i) \in E^D : i = 1 \ldots n\}$ an *inference path* from $y_1$ to $y_2$ if $y_1 = s_1$, $t_n = y_2$, and $s_{i+1} = t_i,\ \forall i < n$. Assuming the independence between $R^D_{s_i}$ and $R^D_{t_{i+1}}$ conditional on $R^D_{s_{i+1}}$ (Assumption 5.9), one can uniquely determine the conditional distribution $p_{R_{y_2}\,|\,R_{y_1}}(\cdot|\cdot)$ based on $\{e_i : i = 1 \ldots n\}$, which we denote with $e_S(\cdot|\cdot)$.
There could be multiple possible inference paths between any pair of vertices. To choose the best one among them, we need to define the *inference variance*.
Definition 5.6 (Inference Distance). Given any inference path $S$ in $G^D$ going from $y_1 \in \mathcal{Y}$ to $y_2 \in \mathcal{Y}$, its inference variance $\operatorname{IV}[S]$ is defined as $\operatorname{Var}\left[R^D_{y_2}\,\big|\,R^D_{y_1}\right]$. The optimal inference path in $G^D$ between $y_1$ and $y_2$, denoted by $S^D_{\rm opt}(y_1,y_2)$, is the inference path with the smallest inference variance. The inference distance $d^D(y_1,y_2)$ between $y_1$ and $y_2$ is defined as $\operatorname{IV}[S^D_{\rm opt}(y_1,y_2)]$. Similarly, we define $d_{\rm IB}(y_1,y_2)$ to be the minimum inference variance of paths leading from $y_1$ to $y_2$ that only traverse edges in $E_{\rm IB}$.

Here, the inference variance $\operatorname{IV}[S]$ and the inference distance $d^D(y_1,y_2)$ measure the uncertainty over the value of $R^D_{y_2}$ if one starts from the value of $R^D_{y_1}$ and follows the inference path $S$. They reflect our ability to determine the relative human preference between $y_1$ and $y_2$ based on information in $D$.
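If one adopts the simplification that edge variances add along an inference path (in the spirit of the conditional independence of Assumption 5.9), finding $S^D_{\rm opt}(y_1, y_2)$ reduces to a shortest-path computation with per-edge variances as weights. A sketch using Dijkstra's algorithm; the additive-variance weighting is our approximation, not the paper's exact calculus:

```python
import heapq
from collections import defaultdict

def inference_distance(edges, y1, y2):
    """Smallest total variance over inference paths from y1 to y2.

    edges: list of (u, v, var) with var = Var[R_v | R_u] of that edge;
    variances are treated as additive along a path (simplification).
    """
    adj = defaultdict(list)
    for u, v, var in edges:
        adj[u].append((v, var))
        adj[v].append((u, var))   # the IBN is treated as undirected
    dist = {y1: 0.0}
    heap = [(0.0, y1)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == y2:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, var in adj[u]:
            nd = d + var
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Toy IBN: low-variance IB edges inside a cluster, one noisy direct edge.
edges = [("a", "b", 0.1), ("b", "c", 0.1),   # E_IB-style edges
         ("a", "c", 1.0),                    # direct but noisy HP edge
         ("c", "d", 0.1)]
d_ac = inference_distance(edges, "a", "c")   # 0.2 via b, beating 1.0
```

In the toy IBN, the two-hop low-variance route (total variance 0.2) beats the direct but noisy edge (variance 1.0), mirroring the intra-cluster-trip / inter-cluster-leap structure analyzed in Section 5.2.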
Definition 5.7 (Mean Inference Distance). The mean inference distance of a human preference dataset $D$ is defined by $\operatorname{E}_{y_1,y_2\sim\mathcal{Y}}\left[d^D(y_1,y_2)\right]$, where $y_1, y_2$ are independently and equiprobably drawn.

Table 1. Comparison between chain-based and tree-based reward modeling under different information structures, different structural functions, and different variance regimes. Each cell contains the mean inference distance under that setting. The variance regime $\mathcal{A}$ denotes the case when the variances of $E_{\rm IB}$ paths are lower-bounded by a constant, and the variance regime $\mathcal{B}$ denotes the case when the variances of $E_{\rm IB}$ paths become $o(1)$. Observe that in case $\mathcal{A}$ of $\mathcal{F}\sim I\cdot M^{-\alpha}$ (the structural function $\mathcal{F}$ is defined in Definition 5.10), the tree-based information structure outperforms the chain-based one by a factor of $(\log|D|)^{1-\alpha}(\log\log|D|)^{-1}=\omega(1)$, while in case $\mathcal{B}$ the latter outperforms the former by $(\log|D|)^{2\alpha/(2+\alpha)}=\omega(1)$. In all other cases, the two have asymptotically equivalent performance. This suggests that the comparative advantage of the tree-based information structure is learning in highly diverse contexts (*i.e.*, $\mathcal{F}\sim I\cdot M^{-\alpha}$) from limited human preference data (*i.e.*, case $\mathcal{A}$).

| Setting | Chain-based RM | Tree-based RM |
|---|---|---|
| $\mathcal{F}\sim I\cdot M^{-\alpha}$, $\mathcal{A}$ (Large Var.) | $O\left(\frac{I\cdot(\log\lvert D\rvert)^{1+\alpha}}{\lvert D\rvert^{\alpha/(2+\alpha)}\log\log\lvert D\rvert}\right)$ | $O\left(\frac{I\cdot(\log\lvert D\rvert)^{2\alpha}}{\lvert D\rvert^{\alpha/(2+\alpha)}}\right)$ |
| $\mathcal{F}\sim I\cdot M^{-\alpha}$, $\mathcal{B}$ (Infinitesimal Var.) | $O\left(I^{2/(2+\alpha)}\,\lvert D\rvert^{-\alpha/(2+\alpha)}\right)$ | $O\left(I^{2/(2+\alpha)}\,(\log\lvert D\rvert)^{2\alpha/(2+\alpha)}\,\lvert D\rvert^{-\alpha/(2+\alpha)}\right)$ |
| $\mathcal{F}\sim I\cdot(\log M)^{-\alpha}$, $\mathcal{A}$ or $\mathcal{B}$ | $O\left(I\cdot(\log\lvert D\rvert)^{-\alpha}\right)$ | $O\left(I\cdot(\log\lvert D\rvert)^{-\alpha}\right)$ |
| $\mathcal{F}=I\cdot\omega\left((\log M)^{-\epsilon}\right)$, $\mathcal{A}$ (Large Var.) | $O\left(\mathcal{F}\left(\lfloor\lvert D\rvert^{1/2}\rfloor\right)\right)$ | $O\left(\mathcal{F}\left(\lfloor\lvert D\rvert^{1/2}\rfloor\right)\right)$ |
| $\mathcal{F}=I\cdot\omega\left((\log M)^{-\epsilon}\right)$, $\mathcal{B}$ (Infinitesimal Var.) | $O\left(\mathcal{F}\left(\left\lfloor\frac{(I\lvert D\rvert)^{1/2}}{(\log\lvert D\rvert)^{\epsilon}}\right\rfloor\right)\right)$ | $O\left(\mathcal{F}\left(\left\lfloor\frac{(I\lvert D\rvert)^{1/2}}{(\log\lvert D\rvert)^{\epsilon}}\right\rfloor\right)\right)$ |
Remark 5.8 (RM Inference and IBN Inference are Analogous). When the training of the RM on $D$ has converged, every sample in $D$ (i.e., every edge in $E^D_{\rm HP}$) serves as a soft constraint on the RM's relative preference between the two compared passages, since any sample preference that is violated will create gradients that pull away from convergence. Therefore, the RM policy that is converged upon represents the joint satisfaction of these soft constraints, which enables the RM to perform the equivalent of multi-hop inference on $G^D$. Thus, we consider an RM trained on dataset $D$ to be approximately equivalent to an optimal inference machine on the IBN $G^D$, which allows us to use the mean inference distance as the quality criterion for datasets.

From now on, we will use the mean inference distance as the criterion for evaluating a dataset's quality. Also note that the inference variance focuses on the relative preference between two vertices, which avoids the problem of shift-invariant reward ratings.
**Assumption 5.9** (Conditional Independence). Given any induced Bayesian network $G^D$ and any $y_1, y_2 \in \mathcal{Y}$, the optimal inference path from $y_1$ to $y_2$, $S^{D}_{\text{opt}}(y_1, y_2)$, satisfies the following property:

$$p\left(R^{D}_{y_{1}}, R^{D}_{y_{2}}\,\big|\,R^{D}_{s_{i}}\right)=p\left(R^{D}_{y_{1}}\,\big|\,R^{D}_{s_{i}}\right)\cdot p\left(R^{D}_{y_{2}}\,\big|\,R^{D}_{s_{i}}\right)$$

for all $s_i$, where $s_i$ is a node on the optimal inference path $S^{D}_{\text{opt}}(y_1, y_2)$.
Note that this assumption is stronger than typical conditional independence assumptions, in that it ignores correlations caused by non-optimal paths, which have a smaller influence on the inference result. It should be viewed as an approximation.
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e345a555-d4dd-49d7-a152-d5f7d026c759 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 5.2 Analysis Of Two Information Structures
Definition 5.10 (Structural Function). Given any $M \in \mathbb{Z}^+$, let $\mathcal{F}(M)$ be the smallest $v \in \mathbb{R}^+$ such that there exists a partition $C_1, \cdots, C_M$ ($C_i \subseteq \mathcal{Y}$) of $\mathcal{Y}$ satisfying⁶

$$\operatorname{E}_{y_1,y_2\sim C_i}\left[d_{\rm IB}(y_1, y_2)\right] \leq v, \quad \forall i$$

$$\frac{1}{2M}\leq\frac{|C_{i}|}{|\mathcal{Y}|}\leq\frac{2}{M},\quad\forall 1\leq i\leq M$$

We will call $\mathcal{F}$ the *structural function*, since its asymptotic behavior reveals structural properties of $E_{\rm IB}$.

Remark 5.11 (Intuition on the Structural Function). The asymptotic behavior of $\mathcal{F}$ can be understood as a measure of the degree of isolation and decentralization in the graph $G'(\mathcal{Y}, E_{\rm IB})$. Extremely dense or centralized graphs, such as a clique or a star graph, possess an asymptotically constant $\mathcal{F}$. Extremely decentralized graphs, such as a long chain, have $\mathcal{F}(M) = \Theta\left(M^{-1}\right)$. Therefore, when $\mathcal{F}(M) \sim I \cdot g(M)$, we interpret $I$ and the asymptotic behavior of $g$ as measures of the diversity and complexity of the language modeling task at hand, since they characterize isolation and decentralization in the output space $\mathcal{Y}$.

Figure 3 provides an example of the $C_1, \cdots, C_M$ partition on an IBN. The inference path illustrated possesses a typical structure that is key to our analysis, where $E_{\rm IB}$ edges constitute the intra-cluster trips, and $E_{\rm HP}$ edges perform the inter-cluster leaps. Refer to Appendix A for details.

Finally, we present the results for the chain-based and tree-based information structures. A dataset of chain-based structure is simply modeled as $\left(y^{\rm A}, y^{\rm B}\right)$ pairs sampled independently and uniformly at random from $\mathcal{Y}^2$. Our modeling scheme for tree-based datasets is more complicated and can be found in Assumption A.18.

⁶Recall that a partition is a series of non-intersecting subsets whose union equals the full set.
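For intuition, $\mathcal{F}(M)$ can be evaluated by brute force on a tiny response set: enumerate the admissible balanced partitions and take the best achievable worst-cluster mean $d_{\rm IB}$. The sketch below does this for $M=2$ with a chain-like toy metric; it is exponential-time and purely illustrative:

```python
from itertools import combinations

def mean_pair_dist(cluster, d):
    # E_{y1, y2 ~ C_i}[d_IB(y1, y2)] over independently drawn pairs
    # (equal pairs included, contributing distance 0).
    pairs = [(a, b) for a in cluster for b in cluster]
    return sum(d[a][b] for a, b in pairs) / len(pairs)

def structural_function_m2(n, d):
    """Brute-force F(2): smallest achievable worst-cluster mean d_IB
    over 2-partitions of {0..n-1} with 1/(2M) <= |C_i|/n <= 2/M."""
    lo, hi = n / 4, n        # size bounds for M = 2
    best = float("inf")
    for size in range(1, n):
        if not (lo <= size <= hi and lo <= n - size <= hi):
            continue
        for c1 in combinations(range(n), size):
            c2 = tuple(i for i in range(n) if i not in c1)
            v = max(mean_pair_dist(c1, d), mean_pair_dist(c2, d))
            best = min(best, v)
    return best

n = 6
d_ib = [[abs(i - j) for j in range(n)] for i in range(n)]  # chain-like (toy)
f2 = structural_function_m2(n, d_ib)   # contiguous halves are optimal
```

On the chain-like $d_{\rm IB}$, the optimum splits the chain into two contiguous halves, consistent with the remark that a long chain is the extreme decentralized case.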
We will denote by $\mathcal{A}$ the case when the variances of $E_{\rm IB}$ paths are lower-bounded by a constant, and denote by $\mathcal{B}$ the case when the variances of $E_{\rm IB}$ paths become $o(1)$.
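The two dataset models can be sketched as data-collection procedures: chain-based sampling draws independent response pairs from $\mathcal{Y}^2$, while tree-based sampling (here, a heavy simplification of Assumption A.18) compares sibling completions that share a prefix and are therefore a priori correlated:

```python
import random

def chain_pairs(responses, n_pairs, rng):
    # Chain-based structure: (y_A, y_B) drawn i.i.d. uniformly from Y^2.
    return [tuple(rng.sample(responses, 2)) for _ in range(n_pairs)]

def tree_pairs(prefix_tree, n_pairs, rng):
    # Tree-based structure (schematic): compare sibling completions,
    # which share a common prefix and are thus a priori correlated.
    sibling_groups = [g for g in prefix_tree.values() if len(g) >= 2]
    return [tuple(rng.sample(rng.choice(sibling_groups), 2))
            for _ in range(n_pairs)]

rng = random.Random(0)
responses = [f"y{i}" for i in range(8)]
# Hypothetical prefix tree: responses grouped by shared prefix.
prefix_tree = {"p0": responses[:4], "p1": responses[4:]}

D_chain = chain_pairs(responses, 16, rng)
D_tree = tree_pairs(prefix_tree, 16, rng)

# Every tree-based pair stays within one sibling group.
assert all({a, b} <= set(responses[:4]) or {a, b} <= set(responses[4:])
           for a, b in D_tree)
```

The tree-based pairs concentrate comparisons inside semantically close clusters, which is exactly the regime where the short intra-cluster $E_{\rm IB}$ trips of the IBN analysis pay off.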
Theorem 5.12 (Mean Inference Distance of Chain-based Dataset). *For any chain-based dataset $D = D_{\rm chain}$, with probability $1-o(1)$ ($|D|\to+\infty$), its mean inference distance $\operatorname{E}_{y_1,y_2\sim\mathcal{Y}}\left[d^{D_{\rm chain}}(y_1,y_2)\right]$ satisfies*

$$\operatorname{E}_{y_1,y_2\sim\mathcal{Y}}\left[d^{D_{\rm chain}}(y_1,y_2)\right]=\begin{cases}O\left(\dfrac{I\cdot(\log|D|)^{1+\alpha}}{|D|^{\frac{\alpha}{2+\alpha}}\log\log|D|}\right)&\left(\mathcal{F}\sim I\cdot M^{-\alpha},\ \mathcal{A}\right)\\[6pt] O\left(I^{\frac{2}{2+\alpha}}\,|D|^{-\frac{\alpha}{2+\alpha}}\right)&\left(\mathcal{F}\sim I\cdot M^{-\alpha},\ \mathcal{B}\right)\\[6pt] O\left(I\cdot(\log|D|)^{-\alpha}\right)&\left(\mathcal{F}\sim I\cdot(\log M)^{-\alpha},\ \mathcal{A}\text{ or }\mathcal{B}\right)\\[6pt] O\left(\mathcal{F}\left(\left\lfloor|D|^{\frac{1}{2}}\right\rfloor\right)\right)&\left(\mathcal{F}=I\cdot\omega\left((\log M)^{-\epsilon}\right),\ \mathcal{A}\right)\\[6pt] O\left(\mathcal{F}\left(\left\lfloor\dfrac{(I|D|)^{\frac{1}{2}}}{(\log|D|)^{\epsilon}}\right\rfloor\right)\right)&\left(\mathcal{F}=I\cdot\omega\left((\log M)^{-\epsilon}\right),\ \mathcal{B}\right)\end{cases}$$
for some constant $\alpha>0$, or for all constant $\epsilon>0$. Note that for $\mathcal{F}\sim I\cdot M^{-\alpha}$ in particular, we have $\alpha<1$, since the unrealistic extreme case of a long chain as $E_{\rm IB}$ achieves the asymptotically smallest $\mathcal{F}$ of $\Theta\left(I\cdot M^{-1}\right)$.
Theorem 5.13 (Mean Inference Distance of Tree-based Dataset). *For any tree-structured dataset $D = D_{\rm tree}$, with probability $1-o(1)$ ($|D|\to+\infty$), its mean inference distance $\operatorname{E}_{y_1,y_2\sim\mathcal{Y}}\left[d^{D_{\rm tree}}(y_1,y_2)\right]$ satisfies*

$$\operatorname{E}_{y_1,y_2\sim\mathcal{Y}}\left[d^{D_{\rm tree}}(y_1,y_2)\right]=\begin{cases}O\left(\dfrac{I\cdot(\log|D|)^{2\alpha}}{|D|^{\frac{\alpha}{2+\alpha}}}\right)&\left(\mathcal{F}\sim I\cdot M^{-\alpha},\ \mathcal{A}\right)\\[6pt] O\left(I^{\frac{2}{2+\alpha}}\,(\log|D|)^{\frac{2\alpha}{2+\alpha}}\,|D|^{-\frac{\alpha}{2+\alpha}}\right)&\left(\mathcal{F}\sim I\cdot M^{-\alpha},\ \mathcal{B}\right)\\[6pt] O\left(I\cdot(\log|D|)^{-\alpha}\right)&\left(\mathcal{F}\sim I\cdot(\log M)^{-\alpha},\ \mathcal{A}\text{ or }\mathcal{B}\right)\\[6pt] O\left(\mathcal{F}\left(\left\lfloor|D|^{\frac{1}{2}}\right\rfloor\right)\right)&\left(\mathcal{F}=I\cdot\omega\left((\log M)^{-\epsilon}\right),\ \mathcal{A}\right)\\[6pt] O\left(\mathcal{F}\left(\left\lfloor\dfrac{(I|D|)^{\frac{1}{2}}}{(\log|D|)^{\epsilon}}\right\rfloor\right)\right)&\left(\mathcal{F}=I\cdot\omega\left((\log M)^{-\epsilon}\right),\ \mathcal{B}\right)\end{cases}$$
for some constant $\alpha > 0$, or for all constant $\epsilon > 0$.

Corollary 5.14. *If the reward modeling process adopts either the chain-based or the tree-based information structure, and the policy optimization process performs $\beta$-entropy-regularized RL, then, when the dataset size $|D| \to +\infty$,*

$$r_{\rm RM}(y_{1})-r_{\rm RM}(y_{2})\xrightarrow{P}r_{\rm H}(y_{1})-r_{\rm H}(y_{2})$$

$$p_{\rm LM}(y)\xrightarrow{d}p_{\rm H}(y)$$

*uniformly for all $(y_{1},y_{2})\in\mathcal{Y}^{2}$ and for all $y\in\mathcal{Y}$.*
The results of Theorem 5.12 and Theorem 5.13 are summarized in Table 1. Observe that in case $\mathcal{A}$ of $\mathcal{F}\sim I\cdot M^{-\alpha}$, the tree-based information structure outperforms the chain-based information structure by a factor of $(\log|D|)^{1-\alpha}(\log\log|D|)^{-1}=\omega(1)$, while in case $\mathcal{B}$ the latter outperforms the former by $(\log|D|)^{2\alpha/(2+\alpha)}=\omega(1)$. In all other cases, the two have asymptotically equivalent performance. This suggests that the comparative advantage of the tree-based information structure is learning in highly diverse contexts (*i.e.*, $\mathcal{F}\sim I\cdot M^{-\alpha}$) from limited human preference data (*i.e.*, case $\mathcal{A}$).

To summarize Section 5, we have modeled both the information structure of the dataset and the inductive bias in RM training, by defining the IBN (Definition 5.3) and related concepts like the mean inference distance (Definition 5.7) and the structural function (Definition 5.10). Using this set of tools, we prove asymptotic bounds on reward generalization in the cases of chain-based (Theorem 5.12) and tree-based (Theorem 5.13) information structures, as two case studies. Comparing the two, we find that the latter is better suited for learning in highly diverse contexts from limited human preference data.
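Taking the case-$\mathcal{A}$ bounds at face value, the chain-to-tree ratio $(\log|D|)^{1-\alpha}/\log\log|D|$ grows without bound whenever $\alpha<1$, but very slowly. A quick numeric illustration; the value of $\alpha$ is chosen arbitrarily:

```python
import math

def chain_tree_ratio(D, alpha):
    # (log|D|)^{1-alpha} / log log |D|: chain bound / tree bound, case A.
    return math.log(D) ** (1 - alpha) / math.log(math.log(D))

alpha = 0.3
ratios = [chain_tree_ratio(10 ** k, alpha) for k in (3, 6, 12, 24, 48)]
# The ratio diverges, slowly: each entry exceeds the previous one.
assert all(b > a for a, b in zip(ratios, ratios[1:]))
```

Even at astronomically large $|D|$ the advantage factor stays single-digit here, which is consistent with the theory speaking only to asymptotic orders.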
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
fbce5cdc-499b-41ee-bde5-b825f272121e | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 6 Experiments
Section 6 answers the following question: On tasks with diverse context and limited data, is tree-based RM more efficient in encoding preferences than chain-based ones? | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
8d7626fd-f801-4d56-9afa-28c20d0eb7d2 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 6.1 Experiment Setup
Dynamic Tree Generation To enhance the benefits of the tree structure, we propose Dynamic Tree Generation
(DTG) for constructing question-answer (QA) datasets and preference datasets. DTG seeks to optimize QA datasets' diversity and stability within a preset maximum tree depth and limited search complexity. Refer to Appendix B.1 for detailed settings including the DTG pseudocode.
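As an illustration, the branching process behind DTG can be sketched as below. This is a minimal sketch under our own assumptions — a stubbed `continue_fn` standing in for SFT-model sampling and a plain depth-limited expansion — not the paper's exact algorithm, whose pseudocode is given in Appendix B.1.

```python
import itertools
from dataclasses import dataclass, field

@dataclass
class Node:
    """A partial response; the root holds the empty continuation."""
    text: str
    depth: int
    children: list = field(default_factory=list)

def dynamic_tree_generation(prompt, continue_fn, max_depth=3, branching=2):
    """Grow a response tree: each node is extended by `branching`
    sampled continuations until `max_depth` is reached.
    `continue_fn(prefix, k)` must return k candidate continuations;
    here it stands in for sampling from the SFT model."""
    root = Node(text="", depth=0)
    frontier = [root]
    while frontier:
        node = frontier.pop()
        if node.depth >= max_depth:
            continue
        for cont in continue_fn(prompt + node.text, branching):
            child = Node(text=node.text + cont, depth=node.depth + 1)
            node.children.append(child)
            frontier.append(child)
    return root

def leaves(node):
    """Full responses are the leaves of the tree."""
    if not node.children:
        return [node.text]
    return list(itertools.chain.from_iterable(leaves(c) for c in node.children))
```

Because sibling leaves share long prefixes, the resulting responses are natural candidates for the tree-based preference pairs described below in the setup.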
Tasks Specification We focused on three key tasks: text conversation, dialogue summarization, and mathematical problem-solving. The HH-RLHF dataset (Bai et al., 2022a) informed our text conversation analysis, while the DialogSum dataset (Chen et al., 2021), with its 13,460 dialogue instances and annotated summaries, served for dialogue summarization. For mathematical problem-solving, we utilized the GSM-8K dataset (Cobbe et al., 2021), comprising
8,500 elementary math word problems.
SFT Models For the text conversation task, we utilize Alpaca-7B (Taori et al., 2023) based on the 52K conversation dataset since it has been widely recognized in dialogue scenarios. For the other tasks, we fine-tune the pre-trained model LLaMA2-7B (Touvron et al., 2023) based on the respective datasets. These serve as our initial models for further preference data sampling, reward modeling, and finetuning.
Preference Labeling For each task we constructed tree-structured and chain-structured preference datasets, both composed of roughly 20K preference pairs. For each tree-based pair, we concatenate the prompt and the shared portion of the answers as context, guiding preference labeling to concentrate on the distinct answer segments. Regarding the chain-based ones, we performed comparisons directly based on prompts and the different answers.
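The tree-based labeling setup — prompt plus shared answer prefix as context, distinct suffixes as the compared responses — can be sketched as follows. The dict serialization and the longest-common-prefix split are our illustrative assumptions, not the paper's exact data format.

```python
import os

def tree_pair(prompt, answer_a, answer_b):
    """Build a tree-based preference pair: the prompt plus the shared
    prefix of the two answers becomes the context, so the annotator
    (human or GPT-4) only compares the distinct suffixes."""
    shared = os.path.commonprefix([answer_a, answer_b])
    return {
        "context": prompt + shared,
        "response_a": answer_a[len(shared):],
        "response_b": answer_b[len(shared):],
    }
```

Because annotators see only the distinct suffixes, preference judgments concentrate on where the two sampled branches actually diverge.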
Evaluation Metrics To verify that the tree-based RM is a better preference encoder than the chain-based one, we finetuned the initial SFT models using two RM-based preference decoders: Proximal Policy Optimization (PPO) (Schulman et al., 2017) and Rejection Sampling Fine-Tuning
(RFT) (Touvron et al., 2023). The methodology for evaluating model performance entails a comparative analysis of the models' responses to held-out prompts, utilizing GPT-4 as the judge. For all prompts regarding our GPT- | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
80e7728d-1525-46c0-8fba-e93a31aa89b1 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 6.1 Experiment Setup
Evaluation Metrics To verify that the tree-based RM is a better preference encoder than the chain-based one, we finetuned the initial SFT models using two RM-based preference decoders: Proximal Policy Optimization (PPO) (Schulman et al., 2017) and Rejection Sampling Fine-Tuning
(RFT) (Touvron et al., 2023). The methodology for evaluating model performance entails a comparative analysis of the models' responses to held-out prompts, utilizing GPT-4 as the judge. For all prompts regarding our GPT-4 preference annotations and evaluation criteria, refer to Appendix B.4.
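The pairwise GPT-4 evaluation can be sketched as below. The query template and the `ask_llm` stub are our illustrative assumptions; the actual judging prompts used in the paper are those of Appendix B.4, and the real call would go to the GPT-4 API.

```python
def judge_pair(prompt, response_a, response_b, ask_llm):
    """Format a pairwise comparison for an LLM judge and parse its
    verdict. `ask_llm` is a stand-in for the actual API call."""
    query = (
        "Compare the two responses to the prompt and answer with 'A' or 'B'.\n"
        f"Prompt: {prompt}\nResponse A: {response_a}\nResponse B: {response_b}\n"
        "Better response:"
    )
    verdict = ask_llm(query).strip().upper()
    if verdict.startswith("A"):
        return "A"
    if verdict.startswith("B"):
        return "B"
    return "tie"

def win_rate(outcomes):
    """Fraction of non-tie comparisons won by model A, as reported
    in the win/lose tables."""
    wins = sum(1 for o in outcomes if o == "A")
    losses = sum(1 for o in outcomes if o == "B")
    return wins / max(wins + losses, 1)
```
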
## 6.2 Analysis Of Experimental Results With PPO
Abilities of Preference Encoding The tree-based RM enhances the efficiency of preference encoding. In Table 2, we demonstrate under three key tasks that: (1) compared to the chain-based scenario, the tree-based RM enables initial SFT models to achieve a higher performance improvement; (2) initial SFT models fine-tuned with tree-based RMs outperform chain-based ones in 65% of cases on average. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e587bb74-84f6-415c-a27f-6e9bc0985282 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 6.3 Analysis Of Experimental Results With Rft
Abilities of Fine-grained Distinction To assess the capability of the tree-based RM in distinguishing fine-grained differences, we conduct RFT on the initial SFT model, Alpaca-7B, using different RMs. We sample $N$ responses for each training prompt and select the highest-scoring one (Best of $N$, *BoN*) as evaluated by the corresponding RM, following Bai et al. (2022b). This optimal response is then used for further fine-tuning of Alpaca-7B. We execute RFT for $N = 2^2, 2^3, \ldots, 2^9$.

According to Figure 5, the tree-based RM significantly outperforms the chain-based ones in enhancing Alpaca-7B, exhibiting a continuous uptrend as the sample size $N$ grows. In contrast, the baseline RM exhibits notable insensitivity to variations in the number of sampled answers.
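The Best-of-$N$ selection step of RFT can be sketched as follows; `sample_fn` and `reward_fn` are illustrative stand-ins for the policy and the reward model, not the paper's implementation.

```python
def rejection_sampling_ft_batch(prompts, sample_fn, reward_fn, n):
    """Best-of-N (BoN) selection for RFT: draw n responses per prompt
    and keep the one the reward model scores highest; the winners are
    then used to fine-tune the SFT model."""
    best = []
    for prompt in prompts:
        candidates = [sample_fn(prompt) for _ in range(n)]
        best.append(max(candidates, key=lambda c: reward_fn(prompt, c)))
    return best
```

A sharper RM benefits more from larger $n$, since finer score distinctions among near-tied candidates change which response survives the `max`.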
Ablation Study on Preference Annotation Our study, using RFT, explores how different proportions of responses in preference data influence the RM's performance. Figure 5 reveals that training RMs on preference data with complete responses leads to superior outcomes. This suggests that finetuning the model's fine-grained distinction abilities can be achieved through adjustments in data generation methods, without altering annotation techniques.
| Datasets | Chain vs. SFT (Win / Lose) | Tree (Ours) vs. SFT (Win / Lose) | Tree (Ours) vs. Chain (Win / Lose) |
|---|---|---|---|
| HH-RLHF | 0.72 / 0.28 | 0.78 / 0.22 | 0.74 / 0.26 |
| GSM-8K | 0.57 / 0.43 | 0.65 / 0.35 | 0.63 / 0.37 |
| DialogueSum | 0.58 / 0.42 | 0.66 / 0.34 | 0.58 / 0.42 |
| Average | 0.62 / 0.38 | 0.70 / 0.30 | 0.65 / 0.35 |
Data Scalability To assess the scalability of the tree-based RM with larger preference datasets, we further replicate the RFT experiments on fine-tuned LLaMA-7B with scaling dataset sizes. As Figure 6 indicates, tree-based RM demonstrates an augmented proficiency in distinguishing fine-grained differences from larger datasets, consistent with Gao et al. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
3be492ac-247a-4094-a261-38a24092b137 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 6.3 Analysis Of Experimental Results With Rft
| Datasets | Chain vs. SFT (Win / Lose) | Tree (Ours) vs. SFT (Win / Lose) | Tree (Ours) vs. Chain (Win / Lose) |
|---|---|---|---|
| DialogueSum | 0.58 / 0.42 | 0.66 / 0.34 | 0.58 / 0.42 |
| Average | 0.62 / 0.38 | 0.70 / 0.30 | 0.65 / 0.35 |
Data Scalability To assess the scalability of the tree-based RM with larger preference datasets, we further replicate the RFT experiments on fine-tuned LLaMA-7B with scaling dataset sizes. As Figure 6 indicates, tree-based RM demonstrates an augmented proficiency in distinguishing fine-grained differences from larger datasets, consistent with Gao et al. (2022). | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
fc47dc99-a654-4e69-a083-c5a87f6672cc | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## 7 Conclusion And Outlook
In this study, we conceptualize RLHF as an autoencoding process, and introduce the induced Bayesian network to analyze reward generalization in RLHF from a graph theory perspective. As a case study using this set of tools, we propose a tree-based method for reward modeling, and validate its superiority over the chain-based baseline through both theoretical and experimental means. We expect our methodology to have wider applications in the analysis of reward generalization.
Limitations & Future Work The present study has focused on the RLHF paradigm, and has restricted attention to efficiency analysis on information structures. The scope of focus can potentially be extended to cover larger areas in the alignment field, such as the scaling analysis of scalable oversight methods (Ji et al., 2023b).
Also, since the IBN method can potentially be utilized to help understand goal misgeneralization (Di Langosco et al., 2022; Shah et al., 2022), further exploration on this front is required, including drawing connections between IBN structures, out-of-distribution contexts, and goals. On the experimental side, while our research has successfully highlighted the encoding efficiency of tree-based RMs, PPO and RFT are not necessarily good decoders for tree-based RMs. Moving forward, future work could aim to design algorithms with enhanced decoding prowess, better leveraging the encoding potential of RMs with sophisticated information structures. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
3a26fb6a-12cc-452e-bc82-1597ab430d0e | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## Appendix Table Of Contents
- A Formulations and Proofs
  - A.1 Formulating Information Structures in Reward Modeling
  - A.2 Analysis of the Chain-based Information Structure
  - A.3 Analysis of the Tree-based Information Structure
  - A.4 Analysis Under the High-Density Regime | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e60ee299-0a4f-4529-b565-03cb48de3922 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## Appendix Table Of Contents
- A.4 Analysis Under the High-Density Regime
- A.5 Convergence of the Reward Model and the Language Model
- B Experiment Details
- B.1 Dynamic Tree Generation | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
91a2381c-7ca1-41ae-a1fa-eb10dcc424d7 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## Appendix Table Of Contents
- B.1 Dynamic Tree Generation
- B.2 Complete vs. Incomplete Responses Annotation
- B.3 Hyperparameters
- B.4 GPT-4 Prompts | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
907acc21-96cd-4707-9cc4-684bf07ab26f | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## Appendix Table Of Contents
- B.4 GPT-4 Prompts
- B.5 Case Study | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
39aa8e98-c3e8-4806-9575-b2a805420423 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## A Formulations And Proofs A.1 Formulating Information Structures In Reward Modeling
Definition A.1 (Hypothesis Distribution). Given a response set $\mathcal{Y}$, the hypothesis distribution $\mathcal{P}_{\mathrm{Hypothesis}}$ is a probability distribution over the space $\mathbb{R}^{\mathcal{Y}}$. Here, $\mathcal{P}_{\mathrm{Hypothesis}}$ stands for the distribution of the reward functions that can be expressed by the pre-trained language model.
**Definition A.2** (Inductive Bias Edge Set). Given a response set $\mathcal{Y}$ and hypothesis distribution $\mathcal{P}_{\mathrm{Hypothesis}}(\cdot)$, the inductive bias edge set $E_{\mathrm{IB}}$ is defined as follows.

$$(y_{i}, y_{j}, \delta_{i,j}) \in E_{\mathrm{IB}}\ \Longleftrightarrow\ I_{h \sim \mathcal{P}_{\mathrm{Hypothesis}}}\left[h(y_{i}), h(y_{j})\right] > C \tag{3}$$

for $\forall y_{i}, y_{j},\ i \neq j,\ i, j \in \{1, 2, \ldots, |\mathcal{Y}|\}$. $C$ is a constant which provides a lower bound on the mutual information of any edge in $E_{\mathrm{IB}}$ over the distribution $\mathcal{P}_{\mathrm{Hypothesis}}$.
We define the inductive bias edge set $E_{\mathrm{IB}}$ to characterize the relevance of elements in $\mathcal{Y}$ before obtaining human rewards. The relevance may stem from factors such as semantic similarity among elements in $\mathcal{Y}$.
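As an illustration, the thresholding in Definition A.2 can be sketched as below. The `similarity` callable is our stand-in for the mutual-information criterion $I_{h \sim \mathcal{P}_{\mathrm{Hypothesis}}}[\cdot,\cdot]$ (e.g. a semantic similarity score); the paper does not prescribe a concrete estimator.

```python
def inductive_bias_edges(responses, similarity, threshold):
    """Sketch of Definition A.2: connect two responses with an
    inductive bias edge when their relevance under the pretrained
    model's hypothesis distribution exceeds the constant C
    (`threshold` here)."""
    n = len(responses)
    return {
        (i, j)
        for i in range(n)
        for j in range(i + 1, n)
        if similarity(responses[i], responses[j]) > threshold
    }
```
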
Definition A.3 (Induced Bayesian Network). Given a response set $\mathcal{Y}$ and any human preference dataset $D = \left\{ (y^{\mathrm{A}}_{D,k}, y^{\mathrm{B}}_{D,k}, \delta_{D,k}) \right\}_{k=1}^{|D|}$, we define $D$'s *induced Bayesian network* (IBN) $G^{D}(\mathcal{Y}, E^{D})$ as a graph with vertex set $\mathcal{Y}$ and edge set $E^{D} = E_{\mathrm{IB}} \cup E^{D}_{\mathrm{HP}}$. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e74f5c7b-14a8-4faf-90b6-b53757d28ebc | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## A Formulations And Proofs A.1 Formulating Information Structures In Reward Modeling
Definition A.3 (Induced Bayesian Network). Given a response set $\mathcal{Y}$ and any human preference dataset $D = \left\{ (y^{\mathrm{A}}_{D,k}, y^{\mathrm{B}}_{D,k}, \delta_{D,k}) \right\}_{k=1}^{|D|}$, we define $D$'s *induced Bayesian network* (IBN) $G^{D}(\mathcal{Y}, E^{D})$ as a graph with vertex set $\mathcal{Y}$ and edge set $E^{D} = E_{\mathrm{IB}} \cup E^{D}_{\mathrm{HP}}$. The human preference edge set $E^{D}_{\mathrm{HP}}$ is defined as

$$E^{D}_{\mathrm{HP}} = \left\{ (u^{D}_{i}, v^{D}_{i}, f^{D}_{i}) : i = 1 \ldots 2|D| \right\}$$

where the $i$-th edge connects $u^{D}_{i}$ with $v^{D}_{i}$ and contains information $f^{D}_{i}$. Here,

$$(u^{D}_{i}, v^{D}_{i}) = \begin{cases} \left( y^{\mathrm{A}}_{D,k}, y^{\mathrm{B}}_{D,k} \right) & \text{if } i = 2k - 1 \\ \left( y^{\mathrm{B}}_{D,k}, y^{\mathrm{A}}_{D,k} \right) & \text{if } i = 2k \end{cases} \quad \text{and} \quad f^{D}_{i}(\cdot|\cdot) = p_{R^{D}_{v^{D}_{i}} | R^{D}_{u^{D}_{i}}}(\cdot|\cdot)$$

is a conditional distribution determined by $\delta_{D, \lceil i/2 \rceil}$.

Here, specifying the | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
abe3287e-eab3-4412-b810-e84ff6e86846 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## A Formulations And Proofs A.1 Formulating Information Structures In Reward Modeling
π¦A π·,π οΏ½ if π = 2π  and π π· π (Β·|Β·) = ππ
π· π£π· π |π
π· π’π· π (Β·|Β·)
is a conditional distribution determined by πΏπ·,β πβ.
Here, specifying the conditional distributions instead of joint distributions avoids issues caused by the shift-invariance of reward scores. In the induced Bayesian network that we define, the edges between any two points are bidirectional. In other words, when defining an edge from π¦1 to π¦2, we also define an edge from π¦2 to π¦1, and the meanings of the weights on these two edges are equivalent. Therefore, in the subsequent sections, for the sake of simplification, we generally consider the induced Bayesian network as an undirected graph without loss of generality.
Assumption A.4 (The Information of an Edge Follows a Logistic Distribution). Given any dataset $D$ and induced Bayesian network $G^{D}(\mathcal{Y}, E^{D})$, we assume that whether the edge from $y_1$ to $y_2$ belongs to $E_{\mathrm{IB}}$ or $E^{D}_{\mathrm{HP}}$, the information $f^{D} = p_{R^{D}_{y_2} | R^{D}_{y_1}}(\cdot|\cdot)$ is the probability density function of a logistic distribution, which means

$$R^{D}_{y_2} \,\big|\, R^{D}_{y_1} = r \ \sim\ \begin{cases} \mathrm{Logistic}\left( r, \frac{1}{\beta_{(y_1,y_2)}} \right) & \text{if } (y_1, y_2) \in E_{\mathrm{IB}} \\ \mathrm{Logistic}\left( r + \delta, \frac{1}{\beta_{\mathrm{HP}}} \right) & \text{if } (y_1, y_2) \in E^{D}_{\mathrm{HP}} \end{cases} \tag{4}$$ | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
ccdbda47-8d8d-401f-8988-fa09962ba2bf | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## A Formulations And Proofs A.1 Formulating Information Structures In Reward Modeling
$\cdot|\cdot)$ is the probability density function of a logistic distribution, which means

$$R^{D}_{y_2} \,\big|\, R^{D}_{y_1} = r \ \sim\ \begin{cases} \mathrm{Logistic}\left( r, \frac{1}{\beta_{(y_1,y_2)}} \right) & \text{if } (y_1, y_2) \in E_{\mathrm{IB}} \\ \mathrm{Logistic}\left( r + \delta, \frac{1}{\beta_{\mathrm{HP}}} \right) & \text{if } (y_1, y_2) \in E^{D}_{\mathrm{HP}} \end{cases} \tag{4}$$

where $\beta_{(y_1,y_2)}$ is a constant related to $(y_1, y_2)$, $\beta_{\mathrm{HP}}$ is a constant related to $E^{D}_{\mathrm{HP}}$, and $\delta$ is related to $(y_1, y_2)$, representing the human preference between $y_1$ and $y_2$. Here we assume that human preferences exhibit a certain degree of stability, which means that for any $(y_1, y_2) \in E^{D}_{\mathrm{HP}}$, $\beta_{\mathrm{HP}}$ has upper and lower bounds. Thus, without loss of generality, we assume that for any $(y_1, y_2) \in E^{D}_{\mathrm{HP}}$, the constant $\beta_{\mathrm{HP}}$ is independent of $E^{D}_{\mathrm{HP}}$.
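The logistic edge model of Assumption A.4 can be simulated directly via the inverse CDF of the logistic distribution; the following is our illustrative sketch (stdlib only), not code from the paper.

```python
import math
import random

def sample_edge(r_y1, beta, delta=0.0, rng=random):
    """Draw R_{y2} | R_{y1} = r from the logistic edge model of
    Assumption A.4: Logistic(r, 1/beta) for an inductive bias edge,
    Logistic(r + delta, 1/beta_HP) for a human preference edge.
    Sampling uses the inverse CDF of the logistic distribution."""
    u = rng.random()
    scale = 1.0 / beta
    return r_y1 + delta + scale * math.log(u / (1.0 - u))

def edge_variance(beta):
    """Variance of a Logistic(mu, 1/beta) step: pi^2 / (3 beta^2)."""
    return math.pi ** 2 / (3.0 * beta ** 2)
```

Larger $\beta$ means a sharper conditional distribution, i.e. the edge carries more information about the relative reward of its endpoints.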
Definition A.5 (Inference Path). Given any dataset $D$ and $y_1 \in \mathcal{Y}$, $y_2 \in \mathcal{Y}$, we call a sequence of edges $S = \{(s_i, t_i, f_i) \in E^{D} : i = 1 \ldots m\}$ an inference path from $y_1$ to $y_2$ if $y_1 = s_1$, $t_m = y_2$, and $s_{i+1} = t_i$ | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
fdb4c0e3-755b-4577-8932-5c3f844a61d8 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## A Formulations And Proofs A.1 Formulating Information Structures In Reward Modeling
$y_1 \in \mathcal{Y}$, $y_2 \in \mathcal{Y}$, we call a sequence of edges $S = \{(s_i, t_i, f_i) \in E^{D} : i = 1 \ldots m\}$ an inference path from $y_1$ to $y_2$ if $y_1 = s_1$, $t_m = y_2$, and $s_{i+1} = t_i, \forall i < m$. Assuming the independence between $R^{D}_{s_i}$ and $R^{D}_{t_{i+1}}$ conditional on $R^{D}_{s_{i+1}}$, one can uniquely determine the conditional distribution $p_{R_{y_2} | R_{y_1}}(\cdot|\cdot)$ based on $\{f_i : i = 1 \ldots m\}$, which we denote with $p_S(\cdot|\cdot)$.
There could be multiple possible inference paths between any pair of vertices. To choose the best one among them, we need
to define the inference variance of any inference path.
Definition A.6 (Inference Distance). Given any inference path $S$ in $G^{D}$ going from $y_1 \in \mathcal{Y}$ to $y_2 \in \mathcal{Y}$, its inference variance $\mathrm{IV}[S]$ is defined as $\mathrm{Var}\left[ R^{D}_{y_2} \,\middle|\, R^{D}_{y_1} \right]$. The optimal inference path in $G^{D}$ between $y_1$ and $y_2$, denoted by $S^{D}_{\mathrm{opt}}(y_1, y_2)$, is the inference path with the smallest inference variance. The inference distance $d^{D}(y_1, y_2)$ between $y_1$ and $y_2$ is defined as $\mathrm{IV}[S^{D}_{\mathrm{opt}}(y_1, y_2)]$. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
697fcd5e-e902-42dc-9f22-1d7a9eb638a5 | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## A Formulations And Proofs A.1 Formulating Information Structures In Reward Modeling
its inference variance $\mathrm{IV}[S]$ is defined as $\mathrm{Var}\left[ R^{D}_{y_2} \,\middle|\, R^{D}_{y_1} \right]$. The optimal inference path in $G^{D}$ between $y_1$ and $y_2$, denoted by $S^{D}_{\mathrm{opt}}(y_1, y_2)$, is the inference path with the smallest inference variance. The inference distance $d^{D}(y_1, y_2)$ between $y_1$ and $y_2$ is defined as $\mathrm{IV}[S^{D}_{\mathrm{opt}}(y_1, y_2)]$. Similarly, we define $d_{\mathrm{IB}}(y_1, y_2)$ to be the minimum inference variance of paths leading from $y_1$ to $y_2$ that only traverse edges in $E_{\mathrm{IB}}$.

Here, the inference variance $\mathrm{IV}[S]$ and the inference distance $d^{D}(y_1, y_2)$ measure the uncertainty over the value of $R^{D}_{y_2}$ if one starts from the value of $R^{D}_{y_1}$ and follows the inference path $S$. They reflect our ability to determine the relative human preference between $y_1$ and $y_2$ based on information in $D$.
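Since the variances of independent logistic steps add along an inference path, the optimal inference path minimizes a sum of per-edge variances, and the inference distance can be computed with Dijkstra's algorithm. The following sketch is our illustration, not code from the paper; encoding edges as a dict of $\beta$ values is an assumption.

```python
import heapq
import math

def inference_distance(num_vertices, edges, y1, y2):
    """Sketch of Definition A.6: each edge (u, v) with parameter beta
    contributes a step variance of pi^2 / (3 beta^2); the inference
    distance is the minimum total variance over paths from y1 to y2,
    found via Dijkstra's algorithm."""
    adj = {u: [] for u in range(num_vertices)}
    for (u, v), beta in edges.items():
        w = math.pi ** 2 / (3.0 * beta ** 2)
        adj[u].append((v, w))
        adj[v].append((u, w))  # the IBN is treated as undirected
    dist = {y1: 0.0}
    heap = [(0.0, y1)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == y2:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return math.inf
```

Note that a multi-hop path through high-$\beta$ edges can beat a direct low-$\beta$ edge, which is exactly the trade-off the chain-versus-tree analysis exploits.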
Definition A.7 (Mean Inference Distance). The mean inference distance of a human preference dataset $D$ is defined by $\mathbb{E}_{y_1, y_2 \in \mathcal{Y}}\left[ d^{D}(y_1, y_2) \right]$, where $y_1, y_2$ are independently and equiprobably drawn.

Remark A.8 (RM Inference and IBN Inference are Analogous). When the training of the RM on $D$ has converged, every sample in $D$ (i.e., every edge in $E^{D}_{\mathrm{HP}}$) serves as a soft constraint on the RM's relative preference between the two compared passages, since any sample preference that is violated will create gradients that pull away from convergence. Therefore, the RM policy | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0fcddcc9-33d3-4e55-8722-9db58db455ef | # Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective
## A Formulations And Proofs A.1 Formulating Information Structures In Reward Modeling
, where $y_1, y_2$ are independently and equiprobably drawn.

Remark A.8 (RM Inference and IBN Inference are Analogous). When the training of the RM on $D$ has converged, every sample in $D$ (i.e., every edge in $E^{D}_{\mathrm{HP}}$) serves as a soft constraint on the RM's relative preference between the two compared passages, since any sample preference that is violated will create gradients that pull away from convergence. Therefore, the RM policy that is converged upon represents the joint satisfaction of these soft constraints, which enables the RM to perform the equivalent of multi-hop inference on $G^{D}$. Thus, we consider an RM trained on dataset $D$ to be approximately equivalent to an optimal inference machine on the IBN $G^{D}$, which allows us to use the mean inference distance as the quality criterion for datasets.

From now on, we will use the mean inference distance as the criterion for evaluating a dataset's quality. Also note that the inference variance focuses on the relative preference between two vertices, which avoids the problem of shift-invariant reward ratings.

**Assumption A.9** (Conditional Independence). Given any induced Bayesian network $G^{D}$ and any $y_1, y_2 \in \mathcal{Y}$, the optimal inference path from $y_1$ to $y_2$, $S^{D}_{\mathrm{opt}}(y_1, y_2)$, satisfies the following property.

$$p\left( R^{D}_{y_1}, R^{D}_{y_2} \,\big|\, R^{D}_{s_i} \right) = p\left( R^{D}_{y_1} \,\big|\, R^{D}_{s_i} \right) \cdot p\left( R^{D}_{y_2} \,\big|\, R^{D}_{s_i} \right) \tag{5}$$

for $\forall s_i$, where $s_i$ is a node in the optimal inference path $S^{D}_{\mathrm{opt}}(y_1, y_2)$. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10184v1.md",
"file_path": "paper_data/2402.10184v1.md",
"file_size": 107340,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |