doc_id (string, 36 chars) · contents (string, 22-3.25k chars) · metadata (dict)
6c1aba44-e643-423b-b99d-27e55621aa55
# Can Separators Improve Chain-Of-Thought Prompting? ## 4.2 Results [Table fragment: … 0.12 | \n\n\n (TripleSkip) | 71.7 ± …]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d798ce24-b096-469d-ab47-72e11fdfdff3
# Can Separators Improve Chain-Of-Thought Prompting? ## 4.2 Results [Table fragment: … ± 0.26 | 49.3 ± 0.19 | 77.4 ± …]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
37903df2-3c23-47cd-8338-388575cfd4c2
# Can Separators Improve Chain-Of-Thought Prompting? ## 4.2 Results [Table fragment: … 77.4 ± 0.16 | COT-SEP …]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ed00da57-a361-4bd9-8cc9-c040f5783d39
# Can Separators Improve Chain-Of-Thought Prompting? ## 4.2 Results [Table fragment: … COT-SEP (Unit: Sentence) | \n | 69.9 ± 0.26 | 44.5 …]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a5b6db8d-5f5a-4525-a79f-0fc8ed965d23
# Can Separators Improve Chain-Of-Thought Prompting? ## 4.2 Results [Table fragment: … 0.26 | 44.5 ± 0.33 | 75.9 ± 0.12 …]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8523f5e5-f5f3-4e7d-80da-c817c55c5b3a
# Can Separators Improve Chain-Of-Thought Prompting? ## 4.2 Results [Table fragment: … ± 0.12 | \n\n | 70.2 ± 0.22 | 44.2 ± …]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
492ab2be-fb78-4d7d-9caa-b06af5445abb
# Can Separators Improve Chain-Of-Thought Prompting? ## 4.2 Results [Table fragment: … 44.2 ± 0.19 | 74.5 ± 0.12 | \n\n\n …]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
89c1fb49-c327-4fe5-950d-2ba965d8ee18
# Can Separators Improve Chain-Of-Thought Prompting? ## 4.2 Results [Table fragment: … 0.12 | \n\n\n | 70.8 ± 0.17 | 45.8 ± …]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2e904c83-eea9-41aa-b7f4-71b177920ac1
# Can Separators Improve Chain-Of-Thought Prompting? ## 4.2 Results [Table fragment: … ± 1.00 | 75.2 ± 0.22 …]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3229ae25-2a7e-4add-85a9-330fcab7e62b
# Can Separators Improve Chain-Of-Thought Prompting? ## 5 Conclusion In this paper, we propose COT-SEP, a novel method that places separators in a structured format to improve Chain-of-Thought (CoT) prompting, a technique widely used with large language models. Within this framework, we place a separator at the end of each prompt exemplar to make the prompt more readable for LLMs. Through multiple experiments employing different separators and structural variations, we demonstrate that COT-SEP effectively improves CoT prompting for various LLMs on highly complex arithmetic and commonsense benchmarks. While our experimental results focus only on CoT prompting, the idea of adding separators within the prompt can easily be plugged into various existing prompting techniques, such as Generated Knowledge Prompting (Liu et al., 2021), Self-Consistency (Wang et al., 2023b), Tree of Thought (Yao et al., 2023; Long, 2023), and GraphPrompt (Liu et al., 2023). We look forward to applying our approach to different prompting strategies in the near future.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e54cc226-8ded-430b-9b48-4d236346743e
# Can Separators Improve Chain-Of-Thought Prompting? ## 6 Limitations Our research relies on CoT prompting (Wei et al., 2022b), which is based on in-context learning. With CoT prompting, the model outputs a final answer along with a step-by-step reasoning process, so the generated chain of thought appears to validate the final answer. This can lead users to place excessive trust in LLMs, which is a concern because LLMs often generate incorrect outputs, and it may ultimately result in automation bias. Users should therefore avoid relying too heavily on LLMs to solve reasoning tasks or make decisions, even when aided by them. For our COT-SEP framework, we use several separators: TripleSkip (\n\n\n), TripleHash (###), TripleStar (***), <br>, and <br/>. However, as our experiments show, LLM performance depends heavily on where and how the separators are placed within the prompts. We therefore recommend placing separators precisely as specified in our framework; otherwise, accuracy may degrade. Due to limited access to computational resources, we conducted a single trial with Meta's Llama-2-7b model and three trials with OpenAI's GPT models. For the same reason, we validate our method on only a few datasets. We tested two arithmetic reasoning datasets, GSM8K (Cobbe et al., 2021) and AQuA (Ling et al., 2017), as they are among the most challenging arithmetic reasoning benchmarks available and showed the weakest performance under vanilla CoT prompting. To validate our method on another type of reasoning task, we selected the CSQA (Talmor et al., 2019) commonsense reasoning benchmark, as it is one of the most difficult commonsense reasoning benchmarks.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
84ff3b4e-800e-4d74-9454-f774aa775e7b
# Can Separators Improve Chain-Of-Thought Prompting? ## Ethics Statement We foresee no ethical concerns that may arise from the findings of this work.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1f11f08a-3f00-43a5-92cd-5fd37e89584f
# Can Separators Improve Chain-Of-Thought Prompting? ## A Experiment Details For Difference In Separator Location In Cot-Sep In this section, we describe the experiment reported in Table 3 in more detail. Fig. 3 provides a more detailed version of Fig. 2, showing two exemplars.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1c35cdf3-1c02-440d-8354-ad0253b2d3e5
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars Our prompts are primarily based on the vanilla CoT prompts provided by Wei et al. (2022b). Full prompts for our experimental setting are included in Tables 4-24. Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8. Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4839d49a-f5ce-4b27-918d-1f49c69395b9
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars = 9. The answer is 9. Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. Table 5: Full prompt for COT-SEP (TripleSkip) used in our experiments on the GSM8K Arithmetic Reasoning Benchmark. Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. \n \n \n Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. \n \n \n Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. \n \n \n Q: Jason had 20 lollipops
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
31e9ea90-e3f1-47fd-a470-28fce18fdc99
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. \n \n \n Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. \n \n \n Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8. \n \n \n Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. \n \n \n Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. \n \n \n Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. \n \n \n Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c764d70d-4603-4eaf-9ae7-fdd201adf634
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. \n \n \n Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. \n \n \n Table 6: Full prompt for COT-SEP (TripleStar) used in our experiments on the GSM8K Arithmetic Reasoning Benchmark. Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. *** Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. *** Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. *** Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8. *** Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d27b9e56-c68a-48ec-a2ff-001f6fb125cd
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars answer is 39. *** Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8. *** Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. *** Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. *** Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. *** Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. *** Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. ### Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5323c81a-8b96-4bf8-8ae2-81c4d680d614
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars is 8. The answer is 8. *** Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. ### Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. ### Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. ### Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8. ### Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. ### Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. ### Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5fa5f73f-fe45-44cb-91db-e931d6b02390
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. ### Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. ### Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. ### Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. <br> Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. <br> Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. <br> Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d1d53455-14df-45e0-a779-a9ff8af698e5
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. <br> Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8. <br> Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. <br> Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. <br> Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. <br> Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. <br> Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f028bb9c-a097-48f9-b50f-4b97dc562ea5
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars bagels for $3 each. How much money does she have left? A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. <br> Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. <br/> Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. <br/> Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39.<br/> Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8. <br/> Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. <br/> Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. <
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0457c02f-9155-4cc9-8cbc-563bdc33effd
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars . How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. <br/> Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. <br/> Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. <br/> Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. <br/> Table 10: Full prompt for Heterogeneous COT-SEP used in our experiments on the GSM8K Arithmetic Reasoning Benchmark. Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. \n \n \n Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. ### Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b069aa2d-5137-43c1-aa9a-374bf68b00b8
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. \n \n \n Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. ### Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. *** Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8. <br> Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. <br/> Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. \n \n \n Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. ### Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fcecc54a-e091-403a-93c3-68858d577701
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars 29. The answer is 29. \n \n \n Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. ### Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. *** Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is equal to 3/2. The answer is (b). Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
33912400-c602-461a-b00f-42019a32d2e3
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars equal to 3/2. The answer is (b). Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788 A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). \n \n \n Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is equal to 3/2. The answer is (b). \n \n \n Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3411a110-c71d-4c6a-9c8f-40a3896df290
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is equal to 3/2. The answer is (b). \n \n \n Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). \n \n \n Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788 A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). \n \n \n Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). *** Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is equal to 3/2. The answer is (b). *** Q: A person is traveling at 20 km/hr and reached
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
40270e4c-2aa4-47d5-8578-dad01b566cda
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is equal to 3/2. The answer is (b). *** Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). *** Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788 A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). *** Benchmark. Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). ### Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
eb41ee3c-92de-4612-b9fd-4b8621527aa1
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars numbers also increases by 10. So the new mean would be 50. The answer is (a). ### Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is equal to 3/2. The answer is (b). ### Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). ### Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788 A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). ### Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). <br> Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: If a /
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4530b2d8-ab95-4398-8e72-ff0fb1b7ec5c
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars ices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). <br> Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is equal to 3/2. The answer is (b). <br> Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). <br> Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788 A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). <br> Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). <br/> Q: If a / b = 3/4 and 8a + 5b = 22
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6a43880d-6cb6-439b-b65b-740d31746c21
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars + 401(3) = 1392. The answer is (b). <br> Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). <br/> Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is equal to 3/2. The answer is (b). <br/> Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). <br/> Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788 A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). <br/> Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
857aa648-56ce-482e-b739-65d4283892fd
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars ) 1788 A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). <br/> Q: John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64 A: If 10 is added to each number, then the mean of the numbers also increases by 10. So the new mean would be 50. The answer is (a). \n \n \n Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a. Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2 A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So a is equal to 3/2. The answer is (b). ### Q: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find the distance? Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km A: The distance that the person traveled would have been 20 km/hr * 2.5 hrs = 50 km. The answer is (e). *** Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788 A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). <br> Q: What do people use to absorb extra ink from a fountain pen
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1330c9e7-484f-4289-a9a9-bf0a2f33b42f
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars *** Q: How many keystrokes are needed to type the numbers from 1 to 500? Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788 A: There are 9 one-digit numbers from 1 to 9. There are 90 two-digit numbers from 10 to 99. There are 401 three-digit numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The answer is (b). <br> Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) inkwell (d) desk drawer (e) blotter A: The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink. So the answer is (e). Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation (c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is (c). Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). Q: Sammy wanted to go to where the people were. Where might he go? Answer Choices: (a) populated areas (b) race track (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart (c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
16e7d2d1-fcba-4084-8040-913c0b368fd8
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart (c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (c) countryside (d) atlas A: The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions. So the answer is (d). Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). Benchmark. Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) inkwell (d) desk drawer (e) blotter A: The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink. So the answer is (e). \n \n \n Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation (c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is (c). Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). \n \n
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
55e633cd-94d8-41b2-b7e2-c5819f12fded
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars Choices: (a) radio shack (b) substation (c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is (c). Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). \n \n \n Q: Sammy wanted to go to where the people were. Where might he go? Answer Choices: (a) populated areas (b) race track (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). \n \n \n Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart (c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). \n \n \n Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (c) countryside (d) atlas A: The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions. So the answer is (d). \n \n \n Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). \n \n \n Table 20: Full prompt for COT-SEP (TripleStar) used in our experiments on the CSQA Commonsense
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b7028dc5-ad11-4b19-a831-a69d6b1cf897
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars directions. So the answer is (d). \n \n \n Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). \n \n \n Table 20: Full prompt for COT-SEP (TripleStar) used in our experiments on the CSQA Commonsense Reasoning Benchmark. Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) inkwell (d) desk drawer (e) blotter A: The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink. So the answer is (e). *** Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation (c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is (c). *** Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). *** Q: Sammy wanted to go to where the people were. Where might he go? Answer Choices: (a) populated areas (b) race track (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). *** Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart (c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b3d3d2fa-91de-451f-8061-4b7146ff34f0
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). *** Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart (c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). *** Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (c) countryside (d) atlas A: The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions. So the answer is (d). *** Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). *** Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) inkwell (d) desk drawer (e) blotter A: The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink. So the answer is (e). ### Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation (c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is (c). ### Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0ff1ac43-8678-4d80-b01a-52807566b6c6
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation (c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is (c). ### Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). ### Q: Sammy wanted to go to where the people were. Where might he go? Answer Choices: (a) populated areas (b) race track (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). ### Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart (c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). ### Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (c) countryside (d) atlas A: The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions. So the answer is (d). ### Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). ### Table 22: Full prompt for COT-SEP (<br>) used in our experiments on the CSQA Commonsense Reasoning Benchmark. Q: What do people use to absorb extra ink from a fountain
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
413a03cd-b7d4-4530-8377-af1831a74c29
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). ### Table 22: Full prompt for COT-SEP (<br>) used in our experiments on the CSQA Commonsense Reasoning Benchmark. Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) inkwell (d) desk drawer (e) blotter A: The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink. So the answer is (e). <br> Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation (c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is (c). <br> Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). <br> Q: Sammy wanted to go to where the people were. Where might he go? Answer Choices: (a) populated areas (b) race track (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). <br> Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart (c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). <br> Q: Google Maps and other highway
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5382c821-7376-4a9b-973c-1f4ae6ded9a0
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). <br> Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart (c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). <br> Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (c) countryside (d) atlas A: The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions. So the answer is (d). <br> Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). <br> Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) inkwell (d) desk drawer (e) blotter A: The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink. So the answer is (e). <br/> Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation (c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is (c). <br/> Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
59f91cfa-7773-45b9-88b7-cdf0aa0fca05
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars requires cable? Answer Choices: (a) radio shack (b) substation (c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is (c). <br/> Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). <br/> Q: Sammy wanted to go to where the people were. Where might he go? Answer Choices: (a) populated areas (b) race track (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). <br/> Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart (c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). <br/> Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (c) countryside (d) atlas A: The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions. So the answer is (d). <br/> Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). <br/> Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) ink
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c4d9e31e-6629-44c3-9a2c-46eb8165ad1a
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars ). <br/> Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). <br/> Q: What do people use to absorb extra ink from a fountain pen? Answer Choices: (a) shirt pocket (b) calligrapher's hand (c) inkwell (d) desk drawer (e) blotter A: The answer must be an item that can absorb ink. Of the above choices, only blotters are used to absorb ink. So the answer is (e). \n \n \n Q: What home entertainment equipment requires cable? Answer Choices: (a) radio shack (b) substation (c) television (d) cabinet A: The answer must require cable. Of the above choices, only television requires cable. So the answer is (c). ### Q: The fox walked from the city into the forest, what was it looking for? Answer Choices: (a) pretty flowers (b) hen house (c) natural habitat (d) storybook A: The answer must be something in the forest. Of the above choices, only natural habitat is in the forest. So the answer is (b). *** Q: Sammy wanted to go to where the people were. Where might he go? Answer Choices: (a) populated areas (b) race track (c) desert (d) apartment (e) roadblock A: The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people. So the answer is (a). <br> Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart (c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). <br/> Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fae1519b-fc0a-4a88-9724-b53856cee2d2
# Can Separators Improve Chain-Of-Thought Prompting? ## B Full List Of Prompt Exemplars . So the answer is (a). <br> Q: Where do you put your grapes just before checking out? Answer Choices: (a) mouth (b) grocery cart (c)super market (d) fruit basket (e) fruit market A: The answer should be the place where grocery items are placed before checking out. Of the above choices, grocery cart makes the most sense for holding grocery items. So the answer is (b). <br/> Q: Google Maps and other highway and street GPS services have replaced what? Answer Choices: (a) united states (b) mexico (c) countryside (d) atlas A: The answer must be something that used to do what Google Maps and GPS services do, which is to give directions. Of the above choices, only atlases are used to give directions. So the answer is (d). \n \n \n Q: Before getting a divorce, what did the wife feel who was doing all the work? Answer Choices: (a) harder (b) anguish (c) bitterness (d) tears (e) sadness A: The answer should be the feeling of someone getting divorced who was doing all the work. Of the above choices, the closest feeling is bitterness. So the answer is (c). ###
{ "creation_datetime": "2024-03-04", "file_name": "2402.10645v1.md", "file_path": "paper_data/2402.10645v1.md", "file_size": 84926, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d53101b7-f46b-4019-99da-d4de02e4a659
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective Tianyi Qiu * Β§ 1 **Fanzhi Zeng** * 1 2 **Jiaming Ji** * 1 **Dong Yan** * 3 Kaile Wang 1 Jiayi Zhou 1 **Han Yang** 1 Josef Dai 1 Xuehai Pan 1 **Yaodong Yang** 1 †
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0082b879-0783-4fa1-a547-6986843053e9
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## Abstract There is a trilemma in reinforcement learning from human feedback (RLHF): the incompatibility between highly diverse contexts, low labeling cost, and reliable alignment performance. Here we aim to mitigate such incompatibility through the design of dataset information structures during reward modeling. Specifically, we first reexamine the RLHF process and propose a theoretical framework portraying it as an autoencoding process over text distributions. Our framework formalizes the RLHF objective of ensuring distributional consistency between human preference and large language model (LLM) behavior. Building on this framework, we then systematically investigate the performance impact of information structure in the reward modeling stage of RLHF. To further understand reward generalization in the reward modeling stage, we introduce a new method based on random graph theory that models generalization in the semantic space. A key insight of our analysis is the superiority of the tree-based information structure in reward modeling, compared to chain-based baselines adopted by conventional RLHF methods. We derive that under highly complex contexts with limited data, the tree-based reward model (RM) induces up to Θ(log 𝑛/log log 𝑛) times less variance than the chain-based RM, where 𝑛 is the dataset size. To validate our theoretical contribution, we demonstrate that on three different NLP tasks, the tree-based RM achieves a 65% win rate on average against chain-based baselines. Looking forward, we hope our framework can serve as a step towards understanding goal misgeneralization.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b74b77e7-a707-4a3c-b4cc-69ca2654935d
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 1 Introduction After training on massive datasets, large language models (LLMs) have displayed remarkably general capabilities. Particularly in specific downstream tasks, these models have reached or even exceeded human expert performance (OpenAI, 2023; Yang et al., 2023a; Bai et al., 2023). However, the training process of LLMs faces several issues. One issue is that these models are trained using vast amounts of text data scraped from the internet. Such data spans various domains and specialties, often containing noise, errors, and social biases (Together Computer, 2023; Ji et al., 2023a). Another issue is that LLMs are primarily trained to perform next-token prediction (Touvron et al., 2023), which can result in model behaviors that are unintended and potentially harmful. Therefore, it is crucial to align LLMs with human intentions and values to ensure the safety and trustworthiness of these systems (Ji et al., 2023b). A class of existing methods aligns LLMs using reward models (RM), trained on human-annotated preference data to represent human preferences. The most notable method within this class, Reinforcement Learning from Human Feedback (RLHF), employs reinforcement learning (RL) to improve the model's responses as judged by the reward model, and balances model optimization with original model fidelity using KL divergence constraints (Christiano et al., 2017; Ziegler et al., 2019; Ouyang et al., 2022; Bai et al., 2022a). RLHF is criticized for its lack of scalability to super-human models (Casper et al., 2023; Burns et al., 2023), but even for current models, RLHF still faces a trilemma: the incompatibility between high task diversity, low labeling cost, and reliable alignment performance (Casper et al., 2023). Some methods, most notably Direct Policy Optimization (DPO) (Rafailov et al., 2023), bypass the reward model using binary cross-entropy for simpler preference learning, and thereby reduce computation costs. Reinforcement Learning from AI Feedback (RLAIF) (Bai et al., 2022b; Lee et al., 2023)
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c6c18e9f-9a27-4865-bd74-c03d9a1e606c
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 1 Introduction Casper et al., 2023; Burns et al., 2023), but even for current models, RLHF still faces a trilemma: the incompatibility between high task diversity, low labeling cost, and reliable alignment performance (Casper et al., 2023). Some methods, most notably Direct Policy Optimization (DPO) (Rafailov et al., 2023), bypass the reward model using binary cross-entropy for simpler preference learning, and thereby reduce computation costs. Reinforcement Learning from AI Feedback (RLAIF) (Bai et al., 2022b; Lee et al., 2023) utilizes AI annotation to reduce annotation costs while maintaining consistency with actual human preferences. These alternative approaches remain constrained by the trilemma above, but all delve into an examination of the preference dataset. That inspires us to characterize the role of the preference dataset's information structure in RLHF from a theoretical perspective, while experimentally validating the efficacy of our theoretically inspired insights. Building upon existing literature, we make the following contributions to the fields of alignment and machine learning theory. - We formalize RLHF as an autoencoding process (Figure 2), and prove a criterion of convergence for this process (Theorem 4.1), stating that under successful reward generalization, both the RM and the post-RLHF LLM converge upon their respective ideal human counterparts. Note that this framework is not contingent on assumptions about information structures, which allows it to be generally applicable. - We propose the induced Bayesian network (IBN, Definition 5.3) for the characterization and analysis of generalization in reward modeling. Drawing from random graph theory and causal analysis, the IBN approach enables empirically grounded analysis of reward generalization, and can derive meaningful bounds (Table 1) without overly strong assumptions on the hypothesis space. Our methods also represent a step towards fully understanding the goal misgeneralization problem (Di Langosco et al., 2022; Shah et al., 2022) in alignment. - We analyze the impact of information structures in RLHF using the IBN method, and, based on this analysis, propose a novel tree-based method for reward modeling. We both formally derive (Theorem 5.12, Theorem 5.13) and experimentally demonstrate (Section 6) the superiority of the tree-based method in diverse contexts with limited data.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
75cae053-d2ca-4847-b3a4-5e64e2a71696
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 2 Related Work RLHF and Alignment Alignment is an area of machine learning research that focuses on ensuring AI systems behave in accordance with human intentions and values (Ji et al., 2023b). RLHF (Christiano et al., 2017; Ouyang et al., 2022; Bai et al., 2022a) is an alignment algorithm that extends Preference-based Reinforcement Learning (Wirth et al., 2017) to align models with human preferences. In the present study, we focus on its application to LLMs. RLHF achieves alignment through RL algorithms that train the policy model (*i.e.,* LLMs) to maximize the cumulative reward from a reward model. Some recent methods aim to streamline RLHF by minimizing (Yuan et al., 2023; Gulcehre et al., 2023) or entirely removing (Rafailov et al., 2023) the reliance on reward models. Concurrently, other research efforts, including those by Bai et al. (2022b) and Lee et al. (2023), focus on using AIs for data annotation to reduce costs. Additionally, there is a drive to refine reward models (Wu et al., 2023), which treat different error rewards as binary classification problems. Generalization in Alignment Di Langosco et al. (2022); Shah et al. (2022) outline the goal misgeneralization problem in RL. Investigating the goal misgeneralization directly in LLMs is challenging, and to the best of our knowledge, there is currently limited related work in this area. Xiong et al. (2024) gives a detailed description of generalization under the strong assumption of linear reward. Under our autoencoding framework of RLHF, we introduce the IBN method to analyze reward generalization in an empirically grounded manner, thus filling a gap within the literature.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
31714785-987c-4fce-afea-771a008f928c
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 3 Preliminaries We start by introducing the prerequisite concepts. Large Language Models The task of LLM generation can be defined with (X, Y, 𝒱, 𝑝LM(· | · ; 𝜃LM)). We consider an LLM to be parameterized by 𝜃LM and denoted by the output distribution 𝑝LM(· | ·). The input space (prompt space) is X ⊂ 𝒱^{≤𝑙max} and the output space is Y ⊂ 𝒱^{≤𝑙max}, for some constant 𝑙max. The model takes as input a sequence x = (𝑥0, · · · , 𝑥𝑛−1), *aka* prompt, to generate the corresponding output (aka response) y = (𝑦0, · · · , 𝑦𝑚−1). 𝑥𝑖 and 𝑦𝑗 represent individual tokens from a predetermined vocabulary 𝒱. The autoregressive language model $p_{\text{LM}}$ sequentially generates tokens for a given position by relying solely on the sequence of tokens it has previously generated. Consequently, this model can be conceptualized as a Markov decision process, wherein the conditional probability $p_{\text{LM}}(\mathbf{y}\mid\mathbf{x})$ can be defined through a decomposition as follows. $$p_{\text{LM}}(y_{0\dots m-1}\mid\mathbf{x})=\prod_{0\leq k<m}p_{\text{LM}}(y_{k}\mid\mathbf{x},y_{0\dots k-1})$$ The RLHF Pipeline Using the notations above, we review the RLHF pipeline from Ziegler et al. (2019); Ouyang et al. (2022). It typically consists of three stages. - *Supervised Fine-tuning (SFT).* RLHF begins with a pretrained language model, which is then fine-tuned via supervised learning, especially using maximum likelihood estimation, on a high-quality human instruction dataset designed for downstream tasks. This process results in a model 𝑝SFT(· | · ; 𝜃SFT). - Collect
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6608281b-19ec-4970-9779-2d992c4b6414
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 3 Preliminaries HF Pipeline Using the notations above, we review the RLHF pipeline from Ziegler et al. (2019); Ouyang et al. (2022). It typically consists of three stages. - *Supervised Fine-tuning (SFT).* RLHF begins with a pretrained language model, which is then fine-tuned via supervised learning, especially using maximum likelihood estimation, on a high-quality human instruction dataset designed for downstream tasks. This process results in a model 𝑝SFT(· | · ; 𝜃SFT). - Collecting Comparison Data and Reward Modeling. This phase involves the collection of comparison data, essential for training the RM 𝑟RM(·|·). The process starts with the model 𝑝SFT(y | x), which generates response pairs (y1, y2) from given prompts x. Human annotators are then tasked with selecting their preferred response from each pair, denoted as y𝑤 ≻ y𝑙 | x, where y𝑤 and y𝑙 denote the preferred and dispreferred answer amongst (y1, y2). - *Policy Optimization via RL.* The final step is optimizing the LLM via RL, guided by the reward model 𝑟RM(·|·). The process of LLMs generating responses from prompts is modeled as a bandit setting (Ouyang et al., 2022), where a reward is obtained from the reward model 𝑟RM(·|·) at the end of each response. The primary objective of RL is to adjust the parameters 𝜃LM of the LLM so that the expected reward on the training prompt distribution PX is maximized. That is, $$\theta_{\rm LM}=\arg\max_{\theta}\ \mathrm{E}_{\mathbf{x}\sim\mathcal{P}_{X},\,\mathbf{y}\sim p_{\rm LM}(\cdot\mid\mathbf{x};\theta)}\left[r_{\rm RM}(\mathbf{y}\mid\mathbf{x})\right]$$
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
76cd4de7-b6f8-4ebe-a0de-53a42c8d0330
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 3 Preliminaries 2022), where a reward is obtained from the reward model 𝑟RM(·|·) at the end of each response. The primary objective of RL is to adjust the parameters 𝜃LM of the LLM so that the expected reward on the training prompt distribution PX is maximized. That is, $$\theta_{\rm LM}=\arg\max_{\theta}\ \mathrm{E}_{\mathbf{x}\sim\mathcal{P}_{X},\,\mathbf{y}\sim p_{\rm LM}(\cdot\mid\mathbf{x};\theta)}\left[r_{\rm RM}(\mathbf{y}\mid\mathbf{x})\right]$$ Chain-based and Tree-based Information Structures In the reward modeling stage of RLHF, we define information structures to be the structures of the information flow that generates the RM 𝑟RM(·) from the idealized human text distribution 𝑝H(·) (Section 4). Concretely speaking, in the present study, we focus on the combinatorial structure of the human preference dataset, as a key aspect of the more broadly-defined information structure. Given a prompt 𝑥, the generation process of the chain-based preference dataset involves independently sampling pairs of responses for comparison to form the human preference dataset. On the other hand, the generation process of the tree-based preference dataset involves sampling a complete tree of responses to prompt 𝑥, where each node contains only one sentence and each non-leaf node has the same number of child nodes. The tree-based preference dataset is created by randomly selecting any two complete responses from the root node to some leaf node, and then using the response pair for comparison. Figure 4 gives an illustration of the two processes.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
64414432-d47c-4d20-a222-c527149b29a8
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 4 Formulating The Rlhf Process Due to our focus on the combinatorial structure of preference data as opposed to the distribution of prompts, we will offer a formulation for RLHF in the context of any fixed prompt π‘₯ ∈ X for simplicity. This approach can be seamlessly adapted to accommodate scenarios with varying prompts. We consider the RLHF pipeline to consist of the following key elements in their order of appearance.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2c4fdd36-635f-47f5-82cb-731542e690eb
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## Idealized Human Text Distribution 𝑝h : Y → R≥0.1 It represents the probabilities of every possible response from an idealized human being whose behavior is in perfect alignment with collective human preferences. Note that the question of how we can determine this distribution is not within the scope of this paper, since our analysis does not rely on the specifics of this distribution. Based on a straightforward generalization of the Bradley-Terry model (Bradley & Terry, 1952), we can further define the *idealized human reward function* 𝑟H : Y → R satisfying $$p_{\rm H}(y_{0})=\frac{\exp\left(\beta r_{\rm H}(y_{0})\right)}{\sum_{y\in\mathcal{Y}}\exp\left(\beta r_{\rm H}(y)\right)}.$$ Human Preference Dataset $D=\{(y^{\rm A}_{D,i},y^{\rm B}_{D,i},\delta_{D,i})\}_{i=1}^{|D|}$. Here, all 𝑦A 𝐷,𝑖, 𝑦B 𝐷,𝑖 are elements of Y drawn in specific ways (depending on the information structure used, which we will specify in Section 5),2 and given 𝑦A 𝐷,𝑖, 𝑦B 𝐷,𝑖, we have $$\delta_{D,i}\sim{\rm Logistic}\left(\frac{1}{\beta}\log\frac{p_{\rm H}(y^{\rm A}_{D,i})}{p_{\rm H}(y^{\rm B}_{D,i})},\ \frac{1}{\beta}\right)={\rm Logistic}\left(r_{\rm H}(y^{\rm A}_{D,i})-r_{\rm H}(y^{\rm B}_{D,i}),\ \frac{1}{\beta}\right)$$ where Logistic(𝜇, 𝑠) stands for a logistic distribution with mean 𝜇 and scale 𝑠, and the random variable 𝛿𝐷,𝑖 stands for the score difference between 𝑦A 𝐷,𝑖 and 𝑦B 𝐷,𝑖 as estimated
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
734802dc-b6a5-47e9-b11b-6bbdbfbef2a6
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## Idealized Human Text Distribution 𝑝h : Y β†’ Rβ‰₯0.1 = Logistic οΏ½ π‘ŸH(𝑦A 𝐷,𝑖) βˆ’ π‘ŸH(𝑦B 𝐷,𝑖), 1 οΏ½ where Logistic(*πœ‡, 𝑠*) stands for a logistic distribution with mean πœ‡ and scale 𝑠, and the random variable 𝛿𝐷,𝑖 stands for the score difference between 𝑦A 𝐷,𝑖 and 𝑦B 𝐷,𝑖 as estimated by a human evaluator. The randomness here is due to the widespread presence of noise in human evaluation data. The fact that $\delta_{D,i}$ follows such a logistic distribution is, again, a corollary of the Bradley-Terry model (**Bradley & Terry, 1952**), since it is the only distribution that satisfies $$\mathrm{P}\left[\delta_{D,i}>0\right]=\frac{\exp\left(\beta r_{\mathrm{H}}(y_{D,i}^{\mathrm{A}})\right)}{\exp\left(\beta r_{\mathrm{H}}(y_{D,i}^{\mathrm{A}})\right)+\exp\left(\beta r_{\mathrm{H}}(y_{D,i}^{\mathrm{B}})\right)}$$ regardless of the values that $r_{\mathrm{H}}(y_{D,i}^{\mathrm{A}}),r_{\mathrm{H}}(y_{D,i}^{\mathrm{B}})$ take. In practice, the strength of human preference is usually collected as discrete integer values or even binary labels, 1By default, we will represent a probability distribution with its probability density function (PDF) or probability mass function (PMF), and will denote with Ξ” [𝑆] the space of all PDFs or PMFs over 𝑆 (*i.e.*, all distributions over 𝑆), depending on whether 𝑆 is a set of discrete elements or not. 2Below, we will not distinguish between π‘¦βˆ— 𝐷
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e2a01806-80ac-4b8f-9c44-a6a0b5f778d8
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## Idealized Human Text Distribution 𝑝h : Y → R≥0.1 take. In practice, the strength of human preference is usually collected as discrete integer values or even binary labels, which can be seen as discretized 𝛿𝐷,𝑖. In any given case, the finer-grained this discretization process is, the more applicable our model will be. 1By default, we will represent a probability distribution with its probability density function (PDF) or probability mass function (PMF), and will denote with Δ[𝑆] the space of all PDFs or PMFs over 𝑆 (*i.e.*, all distributions over 𝑆), depending on whether 𝑆 is a set of discrete elements or not. 2Below, we will not distinguish between 𝑦∗ 𝐷,𝑖 as elements of Y and as random variables taking values in Y. The meaning should be clear from the context. We will also adopt this convention for other similar variables. Reward Model 𝑟RM(·). The reward model can be seen as a finite-sample estimator of 𝑟H based on 𝐷. It is a function-valued random variable that takes values in RY and depends on 𝐷. It follows the distribution 𝑝𝑟RM ∈ Δ[RY]. We can equivalently view 𝑟RM(·) as a mapping that maps every 𝑦 ∈ Y to a real-valued random variable, and 𝑝𝑟RM as the joint distribution of those random variables. One could obtain 𝑟RM using Bayesian inference on 𝑟H,3
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3bd87643-89ff-471c-8c99-ea59db39dc9c
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## Idealized Human Text Distribution 𝑝h : Y → R≥0.1 $$p_{r_{\rm H}(y_{D,i}^{\rm A})\mid u_{0},\,\delta_{D,i}=d_{0}}(v_{0})=\frac{p_{r_{\rm H}(y_{D,i}^{\rm A})\mid u_{0}}(v_{0})\cdot p_{\delta_{D,i}\mid u_{0},v_{0}}(d_{0})}{\int_{\mathbb{R}}p_{r_{\rm H}(y_{D,i}^{\rm A})\mid u_{0}}(v)\cdot p_{\delta_{D,i}\mid u_{0},v}(d_{0})\,{\rm d}v}=\frac{p_{\delta_{D,i}\mid u_{0},v_{0}}(d_{0})}{\int_{\mathbb{R}}p_{\delta_{D,i}\mid u_{0},v}(d_{0})\,{\rm d}v}=\frac{\beta\exp\left(\beta(v_{0}-u_{0}-d_{0})\right)}{\left[1+\exp\left(\beta(v_{0}-u_{0}-d_{0})\right)\right]^{2}},$$ assuming a uniform prior $p_{r_{\rm H}(y_{D,i}^{\rm A})\mid r_{\rm H}(y_{D,i}^{\rm B})=u_{0}}(\cdot)$.4 Therefore, we have obtained the posterior distribution after observing one single sample (𝑦A 𝐷,𝑖, 𝑦B 𝐷,𝑖, 𝛿𝐷,𝑖), $$r_{\rm H}(y_{D,i}^{\rm A})\mid r_{\rm H}(y_{D,i}^{\rm B}),\delta_{D,i}\ \sim\ {\rm Logistic}\left(r_{\rm H}(y_{D,i}^{\rm B})+\delta_{D,i},\frac{1}{\beta}\right)\tag{1}$$
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
47d6c7b5-8e13-4506-9d30-f80d6066ae7b
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## Idealized Human Text Distribution 𝑝h : Y β†’ Rβ‰₯0.1 posterior distribution after observing one single sample (𝑦A 𝐷,𝑖, 𝑦B 𝐷,𝑖, 𝛿𝐷,𝑖), $$r_{\rm H}(y_{D,i}^{\rm A})\mid r_{\rm H}(y_{D,i}^{\rm B}),\delta_{D,i}$$ $$\sim\ {\rm Logistic}\left(r_{\rm H}(y_{D,i}^{\rm B})+\delta_{D,i},\frac{1}{\beta}\right)\tag{1}$$ pairs. We will take this step in Section 5. Note that this relationship is not sufficient for constructing the entire function π‘ŸRM, since the inference above is only at the level of response pairs, while a full-fledged inference process should work at the model level, taking into account the interdependence between different οΏ½π‘ŸH(𝑦A 𝐷,𝑖), π‘ŸH(𝑦B 𝐷,𝑖)οΏ½ Language Model 𝑝LM(Β·). The language model is RLHF- tuned from the post-SFT model based on rewards from π‘ŸRM. We characterize it as a function-valued random variable that takes values in Ξ” [Y] and depends on π‘ŸRM. We can equivalently view 𝑝LM(Β·) as a mapping that maps every 𝑦 ∈ Y to a real-valued random variable 𝑝LM(𝑦),5 and it holds that οΏ½ 𝑦 𝑝LM(𝑦) ≑ 1. Figure 2 gives a visualization of the full framework. We consider the process 𝑝H(Β·) β†’ π‘ŸH(Β·) β†’ 𝑝 𝛿|𝑦𝐴,𝑦𝐡 (Β·) to be 3When writing conditional probabilities, we may abbreviate the condition π‘ŸH(𝑦B 𝐷,𝑖) = 𝑒0 with 𝑒0, and likewise for οΏ½
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1ff405ee-bc30-4a21-b613-209ac2b740b3
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## Idealized Human Text Distribution 𝑝h : Y β†’ Rβ‰₯0.1 𝑝LM(𝑦) ≑ 1. Figure 2 gives a visualization of the full framework. We consider the process 𝑝H(Β·) β†’ π‘ŸH(Β·) β†’ 𝑝 𝛿|𝑦𝐴,𝑦𝐡 (Β·) to be 3When writing conditional probabilities, we may abbreviate the condition π‘ŸH(𝑦B 𝐷,𝑖) = 𝑒0 with 𝑒0, and likewise for π‘ŸH(𝑦A 𝐷,𝑖) = 𝑣0 and 𝛿𝐷,𝑖 = 𝑑0. 4To be exact, here π‘π‘ŸH(𝑦A 𝐷,𝑖) |π‘ŸH(𝑦B 𝐷,𝑖)=𝑒0 (Β·) is uniform on [βˆ’πΏ, 𝐿] for a large 𝐿 ∈ R+, and the derivation above concerns the limit at 𝐿 β†’ +∞. 5These random variables are not mutually independent. inherent in the generation of human preference data. Our learning process 𝐷 = {(𝑦𝐴, 𝑦𝐡, 𝛿)} β†’ π‘ŸRM(𝑦) β†’ 𝑝LM(𝑦), on the other hand, is a mirror image of the preference generation process. π‘ŸRM(Β·) can be seen as a finite-sample Bayes estimator of π‘ŸH(Β·), while 𝑝LM(Β·) can be viewed as an approximation of 𝑝H(Β·). We demonstrate this correspondence with the following convergence theorem. Theorem 4.1. *If the reward modeling process (*i.e., the encoding process) satisfies that $\begin{array}{c}\mbox{lim sup Var}\left[r_{\rm RM}(y_{1})\mid r_{\rm RM}(y_{2})\right]=0\\ \mbox{$|D|\rightarrow
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
076bc424-0bf5-4938-8d72-f55ebf081e1d
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## Idealized Human Text Distribution 𝑝h : Y → R≥0.1 𝑟H(·), while 𝑝LM(·) can be viewed as an approximation of 𝑝H(·). We demonstrate this correspondence with the following convergence theorem. Theorem 4.1. If the reward modeling process (i.e., the encoding process) satisfies that $$\lim_{|D|\to+\infty}\ \sup_{y_{1},y_{2}\in\mathcal{Y}}\ {\rm Var}\left[r_{\rm RM}(y_{1})\mid r_{\rm RM}(y_{2})\right]=0$$ and the policy optimization process (i.e., the decoding process) performs 𝛽-entropy-regularized RL, or, in other words, $${\rm E}_{y\sim p_{\rm LM}}\left[r_{\rm RM}(y)\right]+\beta\,{\rm H}_{y\sim p_{\rm LM}}\left[y\right]=\sup_{p_{\rm LM}^{\prime}\in\Delta[\mathcal{Y}]}\left({\rm E}_{y\sim p_{\rm LM}^{\prime}}\left[r_{\rm RM}(y)\right]+\beta\,{\rm H}_{y\sim p_{\rm LM}^{\prime}}\left[y\right]\right)\tag{2}$$ then, when the dataset size |𝐷| → +∞, $$r_{\rm RM}(y_{1})-r_{\rm RM}(y_{2})\ \xrightarrow{P}\ r_{\rm H}(y_{1})-r_{\rm H}(y_{2}),\qquad p_{\rm LM}(y)\ \xrightarrow{d}\ p_{\rm H}(y)$$ uniformly for all (𝑦1, 𝑦2) ∈ Y^2 and for all 𝑦 ∈ Y. Proof Sketch. The convergence-in-probability of 𝑟RM can be proven using the independence between 𝑟RM
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
146086a4-df1e-4273-bb3a-0a367285a0d7
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## Idealized Human Text Distribution 𝑝h : Y → R≥0.1 𝑟RM(𝑦1) − 𝑟RM(𝑦2) 𝑃→ 𝑟H(𝑦1) − 𝑟H(𝑦2) and 𝑝LM(𝑦) 𝑑→ 𝑝H(𝑦), uniformly for all (𝑦1, 𝑦2) ∈ Y^2 and for all 𝑦 ∈ Y. Proof Sketch. The convergence-in-probability of 𝑟RM can be proven using the independence between 𝑟RM(𝑦2) and 𝑟RM(𝑦1) − 𝑟RM(𝑦2) (Lemma A.10) and then applying tail inequalities. See Proposition A.23 for a more detailed proof. The convergence-in-distribution of 𝑝LM can be proven by deriving the solution for (2) and then analyzing error propagation. See Proposition A.24 for a more detailed proof. □
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
38d179c3-6d3b-4418-9ce5-b4ff34f5dcd4
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 5 Analysis Of Information Structures In Reward Modeling In this section, we continue to work within the framework proposed in Section 4, and zoom in on the encoding stage with a focus on information structures. For the simplicity of notation, we will use 𝑅𝐷 𝑦 as an abbreviation for the random variable π‘ŸRM(𝑦) under the human preference dataset 𝐷. We provide a formal model of information structure and its impact on reward modeling. Using this model, we go on to analyze chain-based and tree-based information structures as case studies. Due to space constraints, we will selectively present key definitions, assumptions, and theorems. Please refer to Appendix A for the complete derivations.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
21084da9-d659-4752-8d1a-524c18b57d9f
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 5.1 The Ibn Formulation We start by giving a model of inductive biases in a pretrained language model, since such a model serves as the starting point of reward model training. This will allow us to provide more realistic bounds on the generalization error of the reward model training process. Definition 5.1 (Hypothesis Distribution). Given response set Y, the hypothesis distribution PHypothesis is a probability distribution over space RY. Here, PHypothesis stands for the distribution of the reward function which can be obtained by finetuning the pretrained language models. Definition 5.2 (Inductive Bias Edge Set). Given response set Y and hypothesis distribution PHypothesis(·), the inductive bias edge set 𝐸IB is defined as follows. $$\left({\rm edge}\left(y_{i},y_{j},\delta_{i,j}\right)\in E_{\rm IB}\right)\iff\left(I_{h\sim\mathcal{P}_{\rm Hypothesis}}\left[h(y_{i}),h(y_{j})\right]>C\right)$$ for ∀𝑦𝑖, 𝑦𝑗, 𝑖 ≠ 𝑗, 𝑖, 𝑗 ∈ {1, 2, ..., |Y|}. 𝐶 is a constant that provides a lower bound on the mutual information of any edge in 𝐸IB over distribution PHypothesis. We define the inductive bias edge set 𝐸IB to characterize the a priori correlations between elements in Y before obtaining human rewards. The relevance may stem from factors such as semantic similarity among elements in Y, since a pretrained language model possesses internal representations of semantic features. Definition 5.3 (Induced Bayesian Network). Given response set Y and any human preference dataset $D=\{(y^{\rm A}_{D,i},y^{\rm B}_{D,i},\delta_{D,i})\}_{i=1}^{|D|}$, we define 𝐷's induced Bayesian net-
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
dcb4240f-f038-41aa-9851-5d149712b6eb
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 5.1 The Ibn Formulation a priori correlations between elements in Y before obtain- ing human rewards. The relevance may stem from factors such as semantic similarity among elements in Y, since a pretrained language model possesses internal representa- tions of semantic features. Definition 5.3 (Induced Bayesian Network). Given re- sponse set Y and any human preference dataset 𝐷 = οΏ½ (𝑦A 𝐷,𝑖, 𝑦B 𝐷,𝑖, 𝛿𝐷,𝑖) οΏ½|𝐷| work (IBN) 𝐺𝐷(Y, 𝐸𝐷) as a graph with vertex set Y and edge set 𝐸𝐷 = 𝐸IB βˆͺ 𝐸𝐷 HP. The human preference edge set 𝐸𝐷 HP is defined as 𝐸𝐷 HP = οΏ½ (𝑒𝐷 𝑗 , 𝑣𝐷 𝑗 , π‘Š 𝐷 𝑗 ) : 𝑗 = 1 *. . .* 2|𝐷| οΏ½ where the 𝑗-th edge connects 𝑒𝐷 𝑗 with 𝑣𝐷 𝑗 and contains information π‘Š 𝐷 𝑗 . Here, (𝑒𝐷 𝑗 , 𝑣𝐷 𝑗 ) =  οΏ½ 𝑦A 𝐷,π‘˜, 𝑦B 𝐷,π‘˜ οΏ½ if 𝑗 = 2π‘˜ βˆ’ 1 οΏ½ 𝑦B 𝐷,π‘˜, 𝑦A 𝐷,π‘˜ οΏ½ if 𝑗 = 2π‘˜  and π‘Š 𝐷 𝑗 (Β·|Β·) = 𝑝𝑅𝐷 𝑣𝐷 𝑗 |𝑅𝐷 𝑒𝐷
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1dbcb1e4-ac5e-4735-b128-f9aa06d0847c
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 5.1 The Ibn Formulation 𝑦B 𝐷,π‘˜ οΏ½ if 𝑗 = 2π‘˜ βˆ’ 1 οΏ½ 𝑦B 𝐷,π‘˜, 𝑦A 𝐷,π‘˜ οΏ½ if 𝑗 = 2π‘˜  and π‘Š 𝐷 𝑗 (Β·|Β·) = 𝑝𝑅𝐷 𝑣𝐷 𝑗 |𝑅𝐷 𝑒𝐷 𝑗 (Β·|Β·) is a conditional distribution determined by 𝛿𝐷,⌈ π‘—βŒ‰. Here, specifying the conditional distributions instead of joint distributions avoids issues caused by the shift-invariance of reward scores. In the induced Bayesian network that we define, the edges between any two points are bidirectional. Therefore, in the subsequent sections, we generally consider the induced Bayesian network as an undirected graph without loss of generality. Assumption 5.4 (Information of an Edge Induces a Logistic Distribution). Given any dataset 𝐷 and induced Bayesian network 𝐺𝐷(Y, 𝐸𝐷), we assume that whether the edge from 𝑦1 to 𝑦2 belongs to 𝐸IB or 𝐸𝐷 HP, the information π‘Š 𝐷 = 𝑝𝑅𝐷 𝑦2 |𝑅𝐷 𝑦1 (Β·|Β·) is the probability density function of a logistic distribution, which means 𝛽(𝑦1,𝑦2) οΏ½ if (𝑦1, 𝑦2) ∈ 𝐸IB Logistic οΏ½ π‘Ÿ, 1 𝛽HP 𝑅𝐷 𝑦2 | 𝑅𝐷 𝑦1 = π‘Ÿ ∼  Logistic οΏ½ π‘Ÿ + 𝛿, 1 οΏ½ if (𝑦1, 𝑦2) ∈ 𝐸𝐷
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d0171d96-1d17-473c-871a-6adc295eb444
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 5.1 The Ibn Formulation οΏ½οΏ½(𝑦1,𝑦2) οΏ½ if (𝑦1, 𝑦2) ∈ 𝐸IB Logistic οΏ½ π‘Ÿ, 1 𝛽HP 𝑅𝐷 𝑦2 | 𝑅𝐷 𝑦1 = π‘Ÿ ∼  Logistic οΏ½ π‘Ÿ + 𝛿, 1 οΏ½ if (𝑦1, 𝑦2) ∈ 𝐸𝐷 HP  where 𝛽(𝑦1,𝑦2) is a constant related to (𝑦1, 𝑦2), 𝛽HP is a constant related to 𝐸𝐷 HP and 𝛿 is related to (𝑦1, 𝑦2), which represents human preference between 𝑦1 and 𝑦2. Here we assume that human preferences exhibit a certain degree of stability, which means that for any (𝑦1, 𝑦2) ∈ 𝐸𝐷 HP, 𝛽HP has upper and lower bounds. Since our analysis only concerns the asymptotic order of statistical quantities, we can assume without loss of generality that for any (𝑦1, 𝑦2) ∈ 𝐸𝐷 HP, constant 𝛽HP is independent of 𝐸𝐷 HP. Note that the claim of 𝑅𝐷 𝑦2|𝑅𝐷 𝑦1 = π‘Ÿ following a logistic distribution when (𝑦1, 𝑦2) ∈ 𝐸𝐷 HP is provided with support in (1) as a corollary of the Bradley-Terry model (Bradley & Terry, 1952). Definition 5.5 (Inference Path). Given any dataset 𝐷 and 𝑦1 ∈ Y, 𝑦2 ∈ Y, we call a sequence of edges 𝑆 = {(𝑠𝑖, 𝑑𝑖,
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
31fac164-dc9e-4b3e-8ec4-1abd08362933
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 5.1 The Ibn Formulation π‘Ÿ following a logistic distribution when (𝑦1, 𝑦2) ∈ 𝐸𝐷 HP is provided with support in (1) as a corollary of the Bradley-Terry model (Bradley & Terry, 1952). Definition 5.5 (Inference Path). Given any dataset 𝐷 and 𝑦1 ∈ Y, 𝑦2 ∈ Y, we call a sequence of edges 𝑆 = {(𝑠𝑖, 𝑑𝑖, π‘Šπ‘–) ∈ 𝐸𝐷 : 𝑖 = 1 . . . π‘˜} an *inference path* from 𝑦1 to 𝑦2 if 𝑦1 = 𝑠1, π‘‘π‘˜ = 𝑦2, and 𝑠𝑖 = 𝑑𝑖+1, βˆ€*𝑖 < π‘˜*. Assuming the independence between 𝑅𝐷 𝑠𝑖 and 𝑅𝐷 𝑑𝑖+1 conditional on 𝑅𝐷 𝑠𝑖+1 (Assumption 5.9), one can uniquely determine the conditional distribution 𝑝𝑅𝑦2 |𝑅𝑦1 (Β·|Β·) based on {π‘Šπ‘– : 𝑖 = 1 *. . . π‘˜*}, which we denote with π‘Šπ‘†(Β·|Β·). There could be multiple possible inference paths between any pair of vertices. To choose the best one among them, we need to define the *inference variance*. Definition 5.6 (Inference Distance). Given any inference path 𝑆 in 𝐺𝐷 going from 𝑦1 ∈ Y to 𝑦2 ∈ Y, its inference variance IV[𝑆] is defined as Var οΏ½ 𝑅𝐷 𝑦2 ��𝑅𝐷 𝑦1 οΏ½ . The optimal inference path in 𝐺𝐷 between οΏ½
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ffc19e88-74ea-43d1-bcd0-695121927e8f
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 5.1 The Ibn Formulation vertices. To choose the best one among them, we need to define the *inference variance*. Definition 5.6 (Inference Distance). Given any inference path 𝑆 in 𝐺𝐷 going from 𝑦1 ∈ Y to 𝑦2 ∈ Y, its inference variance IV[𝑆] is defined as Var οΏ½ 𝑅𝐷 𝑦2 ��𝑅𝐷 𝑦1 οΏ½ . The optimal inference path in 𝐺𝐷 between 𝑦1 and 𝑦2, denoted by 𝑆𝐷 opt(𝑦1, 𝑦2), is the inference path with the smallest inference variance. The inference distance 𝑑𝐷(𝑦1, 𝑦2) between 𝑦1 and 𝑦2 is defined as IV[𝑆𝐷 opt(𝑦1, 𝑦2)]. Similarly, we define 𝑑IB(𝑦1, 𝑦2) to be the minimum inference variance of paths leading from 𝑦1 to 𝑦2 that only traverse edges in 𝐸IB. Here, the inference variance IV[𝑆] and the inference distance 𝑑𝐷(𝑦1, 𝑦2) measures the uncertainty over the value of 𝑅𝐷 𝑦2 if one starts from the value of 𝑅𝐷 𝑦1 and follows the inference path 𝑆. They reflect our ability to determine the relative human preference between 𝑦1 and 𝑦2 based on information in 𝐷. Definition 5.7 (Mean Inference Distance). The mean inference distance of a human preference dataset 𝐷 is defined structures, different structural functions, and different variance regimes. Each cell contains the mean inference distance under that setting. The variance regime π’œ denotes the case when the variances of 𝐸IB paths are lower-bounded by a constant, and the variance regime ℬ denotes the case when the variances of 𝐸IB paths become π‘œ(1). Observe
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9d11e1ae-174e-4417-92c0-49eb48f52f97
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 5.1 The Ibn Formulation 1 and 𝑦2 based on information in 𝐷. Definition 5.7 (Mean Inference Distance). The mean inference distance of a human preference dataset 𝐷 is defined as E𝑦1,𝑦2∈Y [𝑑𝐷(𝑦1, 𝑦2)], where 𝑦1, 𝑦2 are independently and equiprobably drawn. The comparison in Table 1 covers different information structures, different structural functions, and different variance regimes. Each cell contains the mean inference distance under that setting. The variance regime 𝒜 denotes the case when the variances of 𝐸IB paths are lower-bounded by a constant, and the variance regime ℬ denotes the case when the variances of 𝐸IB paths become 𝑜(1). Observe that in case 𝒜 of F ∼ 𝐼 · 𝑀^{−𝛼} (the structural function F is defined in Definition 5.10), the tree-based information structure outperforms the chain-based information structure by a factor of (log |𝐷|)^{1−𝛼} (log log |𝐷|)^{−1} = 𝜔(1), while in case ℬ the latter information structure outperforms the former by (log |𝐷|)^{2𝛼/(2+𝛼)} = 𝜔(1). In all other cases, the two have asymptotically equivalent performance. This suggests that the comparative advantage of the tree-based information structure is learning in highly diverse contexts (*i.e.*, F ∼ 𝐼 · 𝑀^{−𝛼}) from limited human preference data (*i.e.*, case 𝒜). [Table 1: columns Chain-based RM and Tree-based RM; rows cross the variance regimes 𝒜 (Large Var.) and ℬ (Infinitesimal Var.) with the structural-function regimes.]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
cce63aa1-f407-4db2-b687-72d21d2ecb7a
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 5.1 The Ibn Formulation [Table 1, continued; the cell values in this fragment are not legible.]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8083df33-06be-418c-9bd0-aaed9cc2c51b
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 5.1 The Ibn Formulation [Table 1, continued; the cell values in this fragment are not legible.]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
422cb340-a149-478a-9098-cd8c33e546fc
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 5.1 The Ibn Formulation [Table 1, continued; the cell values in this fragment are not legible.]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c13408eb-5bba-4d42-92ef-53995807b9d9
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 5.1 The Ibn Formulation [Table 1, continued; the cell values in this fragment are not legible.]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
cc426764-c0db-4f1a-a1eb-df69f707d11e
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 5.1 The Ibn Formulation [Table 1, continued; recoverable cells: for $\mathcal{F}\sim I\cdot(\log M)^{-\alpha}$, both the chain-based and tree-based RMs give $O\left(I\cdot(\log|D|)^{-\alpha}\right)$; for $\mathcal{F}=I\cdot\omega\left((\log M)^{-\alpha}\right)$, the listed cells are all $O\left(\mathcal{F}\left(|D|^{1/2}\right)\right)$.]
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b1b272a7-3a21-4793-8269-3b1200eca090
# Rethinking Information Structures In Rlhf: Reward Generalization From A Graph Theory Perspective ## 5.1 The Ibn Formulation by E𝑦1,𝑦2∈Y [𝑑𝐷(𝑦1, 𝑦2)], where 𝑦1, 𝑦2 are independently and equiprobably drawn. Remark 5.8 (RM Inference and IBN Inference are Analogous). When the training of the RM on 𝐷 has converged, every sample in 𝐷 (i.e., every edge in 𝐸𝐷 HP) serves as a soft constraint on 𝐶's relative preference between the two compared passages, since any sample preference that is violated will create gradients that pull away from convergence. Therefore, the RM policy that is converged upon represents the joint satisfaction of these soft constraints, which enables the RM to perform the equivalent of multi-hop inference on 𝐺𝐷. Thus, we consider an RM trained on dataset 𝐷 to be approximately equivalent to an optimal inference machine on the IBN 𝐺𝐷, which allows us to use the mean inference distance as the quality criterion for datasets. From now on, we will use the mean inference distance as the criterion for evaluating a dataset's quality. Also note that the inference variance focuses on the relative preference between two vertices, which avoids the problem of shift-invariant reward ratings. Assumption 5.9 (Conditional Independence).
{ "creation_datetime": "2024-03-04", "file_name": "2402.10184v1.md", "file_path": "paper_data/2402.10184v1.md", "file_size": 107340, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
db42becb-6078-4e6c-8db8-864198aed58b
Given any induced Bayesian network $G^{D}$ and any $y_{1},y_{2}\in\mathcal{Y}$, the optimal inference path from $y_{1}$ to $y_{2}$, $S^{D}_{\mathrm{opt}}(y_{1},y_{2})$, satisfies the following property:
$$p\left(R^{D}_{y_{1}},R^{D}_{y_{2}}\,\middle|\,R^{D}_{s_{i}}\right)=p\left(R^{D}_{y_{1}}\,\middle|\,R^{D}_{s_{i}}\right)\cdot p\left(R^{D}_{y_{2}}\,\middle|\,R^{D}_{s_{i}}\right)$$
for all $s_{i}$, where $s_{i}$ is a node in the optimal inference path $S^{D}_{\mathrm{opt}}(y_{1},y_{2})$.

Note that this assumption is stronger than typical conditional independence assumptions, in that it ignores correlations caused by non-optimal paths, which have a smaller influence on the inference result.
It should be viewed as an approximation.
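As a concrete illustration of the machinery above (not taken from the paper), the sketch below computes inference distances on a toy IBN. It assumes that, under the conditional-independence assumption, per-edge conditional variances add along a path, so the optimal inference path is a variance-weighted shortest path, and the mean inference distance can be estimated by averaging over random vertex pairs. The toy graph, function names, and edge variances are all our own placeholders.

```python
import heapq
import random

def inference_distance(adj, source):
    """Dijkstra over per-edge variances: for every vertex, the smallest
    total variance of any inference path starting from `source`."""
    dist = {v: float("inf") for v in adj}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, var in adj[u]:
            if d + var < dist[v]:
                dist[v] = d + var
                heapq.heappush(heap, (dist[v], v))
    return dist

def mean_inference_distance(adj, num_pairs=2000, seed=0):
    """Monte-Carlo estimate of E_{y1,y2}[d(y1, y2)] with y1, y2 drawn
    independently and equiprobably from the vertex set."""
    rng = random.Random(seed)
    vertices = list(adj)
    cache = {}
    total = 0.0
    for _ in range(num_pairs):
        y1, y2 = rng.choice(vertices), rng.choice(vertices)
        if y1 not in cache:
            cache[y1] = inference_distance(adj, y1)
        total += cache[y1][y2]
    return total / num_pairs

# Toy IBN on 6 vertices: high-variance E_IB edges along a chain,
# plus two low-variance edges playing the role of E_HP comparisons.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0), (4, 5, 1.0),
         (0, 3, 0.1), (2, 5, 0.1)]
adj = {v: [] for v in range(6)}
for u, v, var in edges:          # undirected graph: store both directions
    adj[u].append((v, var))
    adj[v].append((u, var))

print(mean_inference_distance(adj))
```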
## 5.2 Analysis of Two Information Structures

**Definition 5.10** (Structural Function). Given any $M \in \mathbb{Z}^{+}$, let $\mathcal{F}(M)$ be the smallest $d \in \mathbb{R}^{+}$ such that there exists a partition $\mathcal{C}_1, \cdots, \mathcal{C}_M$ ($\mathcal{C}_i \subseteq \mathcal{Y}$) of $\mathcal{Y}$ (recall that a partition is a series of non-intersecting subsets whose union equals the full set) satisfying
$$\mathbb{E}_{y_1,y_2\in\mathcal{C}_i}\left[d_{\mathrm{IB}}(y_1,y_2)\right] \leq d, \quad \forall i$$
$$\frac{1}{2M}\leq\frac{|\mathcal{C}_{i}|}{|\mathcal{Y}|}\leq\frac{2}{M},\quad\forall 1\leq i\leq M$$
We will call $\mathcal{F}$ the *structural function*, since its asymptotic behavior reveals structural properties of $E_{\mathrm{IB}}$.

**Remark 5.11** (Intuition on the Structural Function). The asymptotic behavior of $\mathcal{F}$ can be understood as a measure of the degree of isolation and decentralization in the graph $G'(\mathcal{Y}, E_{\mathrm{IB}})$. Extremely dense graphs or centralized graphs, such as a clique or a star graph, possess an asymptotically constant $\mathcal{F}$. Extremely decentralized graphs, such as a long chain, have $\mathcal{F}(M) = \Theta\left(M^{-1}\right)$. Therefore, when $\mathcal{F}(M) \sim I \cdot g(M)$, we interpret $I$ and the asymptotic behavior of $g$ as measures of the diversity and complexity of the language modeling task at hand, since they characterize isolation and decentralization in the output space $\mathcal{Y}$.

Figure 3 provides an example of the $\mathcal{C}_1, \cdots, \mathcal{C}_M$ partition on an IBN. The inference path illustrated possesses a typical structure that is key to our analysis, where $E_{\mathrm{IB}}$ edges constitute the intra-cluster trips, and $E_{\mathrm{HP}}$ edges perform the inter-cluster leaps. Refer to Appendix A for details.

Finally, we present the results for the chain-based and tree-based information structures. A dataset of chain-based structure is
simply modeled as $\left(y^{\mathrm{A}}, y^{\mathrm{B}}\right)$ pairs sampled independently and uniformly at random from $\mathcal{Y}^2$. Our modeling scheme for tree-based datasets is more complicated and can be found in Assumption A.18. We will denote by $\mathcal{A}$ the case when the variances of $E_{\mathrm{IB}}$ paths are lower-bounded by a constant, and denote by $\mathcal{B}$ the case when the variances of $E_{\mathrm{IB}}$ paths become $o(1)$.

**Theorem 5.12** (Mean Inference Distance of Chain-based Dataset). For any chain-based dataset $D = D_{\mathrm{chain}}$, with probability $1-o(1)$ ($|D|\to+\infty$), its mean inference distance $\mathbb{E}_{y_1,y_2\in\mathcal{Y}}\left[d^{D_{\mathrm{chain}}}(y_1,y_2)\right]$ satisfies
$$\mathbb{E}_{y_1,y_2\in\mathcal{Y}}\left[d^{D_{\mathrm{chain}}}(y_1,y_2)\right]=\begin{cases}O\left(\frac{I\cdot(\log|D|)^{1+\alpha}}{|D|^{\alpha}\log\log|D|}\right)&\left(\mathcal{F}\sim I\cdot M^{-\alpha},\ \mathcal{A}\right)\\[4pt]O\left(I^{\frac{2}{2+\alpha}}\cdot|D|^{-\frac{\alpha}{2+\alpha}}\right)&\left(\mathcal{F}\sim I\cdot M^{-\alpha},\ \mathcal{B}\right)\\[4pt]O\left(I\cdot(\log|D|)^{-\alpha}\right)&\left(\mathcal{F}\sim I\cdot(\log M)^{-\alpha},\ \mathcal{A}\text{ or }\mathcal{B}\right)\\[4pt]O\left(\mathcal{F}\left(\left\lfloor|D|^{\frac{1}{2}}\right\rfloor\right)\right)&\left(\mathcal{F}=I\cdot\omega\left((\log M)^{-\epsilon}\right),\ \mathcal{A}\right)\\[4pt]O\left(\mathcal{F}\left(\left\lfloor\frac{(I|D|)^{\frac{1}{2}}}{(\log|D|)^{\epsilon}}\right\rfloor\right)\right)&\left(\mathcal{F}=I\cdot\omega\left((\log M)^{-\epsilon}\right),\ \mathcal{B}\right)\end{cases}$$
for some constant $\alpha > 0$, or for all constant $\epsilon > 0$. Note that for $\mathcal{F}\sim I\cdot M^{-\alpha}$ in particular, we have $\alpha < 1$, since the unrealistic extreme case of a long chain as $E_{\mathrm{IB}}$ achieves the asymptotically smallest $\mathcal{F}$ of $\Theta\left(I\cdot M^{-1}\right)$.

**Theorem 5.13** (Mean Inference Distance of Tree-based Dataset). For any tree-structured dataset $D = D_{\mathrm{tree}}$, with probability $1-o(1)$ ($|D|\to+\infty$), its mean inference distance $\mathbb{E}_{y_1,y_2\in\mathcal{Y}}\left[d^{D_{\mathrm{tree}}}(y_1,y_2)\right]$ satisfies
$$\mathbb{E}_{y_1,y_2\in\mathcal{Y}}\left[d^{D_{\mathrm{tree}}}(y_1,y_2)\right]=\begin{cases}O\left(\frac{I\cdot(\log|D|)^{2\alpha}}{|D|^{\alpha}}\right)&\left(\mathcal{F}\sim I\cdot M^{-\alpha},\ \mathcal{A}\right)\\[4pt]O\left(I^{\frac{2}{2+\alpha}}\cdot\frac{(\log|D|)^{\frac{2\alpha}{2+\alpha}}}{|D|^{\frac{\alpha}{2+\alpha}}}\right)&\left(\mathcal{F}\sim I\cdot M^{-\alpha},\ \mathcal{B}\right)\\[4pt]O\left(I\cdot(\log|D|)^{-\alpha}\right)&\left(\mathcal{F}\sim I\cdot(\log M)^{-\alpha},\ \mathcal{A}\text{ or }\mathcal{B}\right)\\[4pt]O\left(\mathcal{F}\left(\left\lfloor|D|^{\frac{1}{2}}\right\rfloor\right)\right)&\left(\mathcal{F}=I\cdot\omega\left((\log M)^{-\epsilon}\right),\ \mathcal{A}\right)\\[4pt]O\left(\mathcal{F}\left(\left\lfloor\frac{(I|D|)^{\frac{1}{2}}}{(\log|D|)^{\epsilon}}\right\rfloor\right)\right)&\left(\mathcal{F}=I\cdot\omega\left((\log M)^{-\epsilon}\right),\ \mathcal{B}\right)\end{cases}$$
for some constant $\alpha > 0$, or for all constant $\epsilon > 0$.

**Corollary 5.14.** If the reward modeling process adopts either the chain-based or the tree-based information structure, and the policy optimization process performs $\beta$-entropy-regularized RL, then, when the dataset size $|D| \to +\infty$,
$$r_{\mathrm{RM}}(y_{1})-r_{\mathrm{RM}}(y_{2})\stackrel{P}{\to}r_{\mathrm{H}}(y_{1})-r_{\mathrm{H}}(y_{2})$$
$$p_{\mathrm{LM}}(y)\stackrel{d}{\to}p_{\mathrm{H}}(y)$$
uniformly for all $(y_{1},y_{2})\in\mathcal{Y}^{2}$ and for all $y\in\mathcal{Y}$.
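The corollary invokes $\beta$-entropy-regularized RL. For orientation only (the paper's precise setup is given in its earlier sections; this generic form is our addition), one standard version of that objective and its well-known maximizer are
$$\max_{p}\ \mathbb{E}_{y\sim p}\left[r_{\mathrm{RM}}(y)\right]+\beta\,\mathcal{H}(p)\qquad\Longrightarrow\qquad p^{*}(y)\propto\exp\!\left(\frac{r_{\mathrm{RM}}(y)}{\beta}\right),$$
so once reward *differences* $r_{\mathrm{RM}}(y_{1})-r_{\mathrm{RM}}(y_{2})$ converge, the induced policy is pinned down up to normalization, which is how reward convergence can translate into distributional convergence of $p_{\mathrm{LM}}$.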
The results of Theorem 5.12 and Theorem 5.13 are summarized in Table 1. Observe that in case $\mathcal{A}$ of $\mathcal{F}\sim I\cdot M^{-\alpha}$, the tree-based information structure outperforms the chain-based information structure by a factor of $(\log|D|)^{1-\alpha}(\log\log|D|)^{-1} = \omega(1)$, while in case $\mathcal{B}$ the latter information structure outperforms the former by $(\log|D|)^{2\alpha/(2+\alpha)} = \omega(1)$. In all other cases, the two have asymptotically equivalent performance. This suggests that the comparative advantage of the tree-based information structure is learning in highly diverse contexts (*i.e.*, $\mathcal{F}\sim I\cdot M^{-\alpha}$) from limited human preference data (*i.e.*, case $\mathcal{A}$).

To summarize Section 5, we have modeled both the information structure of the dataset and the inductive bias in RM training, by defining the IBN (Definition 5.3) and related concepts like the mean inference distance (Definition 5.7) and the structural function (Definition 5.10). Using this set of tools, we go on to prove asymptotic bounds on reward generalization in the case of chain-based (Theorem 5.12) and tree-based information structure (Theorem 5.13) respectively, as two case studies. Comparing the two, we find that the latter is better suited for learning in highly diverse contexts from limited human preference data.
## 6 Experiments

Section 6 answers the following question: On tasks with diverse context and limited data, are tree-based RMs more efficient in encoding preferences than chain-based ones?
## 6.1 Experiment Setup

**Dynamic Tree Generation** To enhance the benefits of the tree structure, we propose Dynamic Tree Generation (DTG) for constructing question-answer (QA) datasets and preference datasets. DTG seeks to optimize QA datasets' diversity and stability within a preset maximum tree depth and limited search complexity. Refer to Appendix B.1 for detailed settings, including the DTG pseudocode.

**Tasks Specification** We focused on three key tasks: text conversation, dialogue summarization, and mathematical problem-solving. The HH-RLHF dataset (Bai et al., 2022a) informed our text conversation analysis, while the DialogSum dataset (Chen et al., 2021), with its 13,460 dialogue instances and annotated summaries, served for dialogue summarization. For mathematical problem-solving, we utilized the GSM-8K dataset (Cobbe et al., 2021), comprising 8,500 elementary math word problems.

**SFT Models** For the text conversation task, we utilize Alpaca-7B (Taori et al., 2023), based on the 52K conversation dataset, since it has been widely recognized in dialogue scenarios. For the other tasks, we fine-tune the pre-trained model LLaMA2-7B (Touvron et al., 2023) on the respective datasets. These serve as our initial models for further preference data sampling, reward modeling, and fine-tuning.

**Preference Labeling** For each task we constructed tree-structured and chain-structured preference datasets, both composed of roughly 20K preference pairs. For each tree-based pair, we concatenate the prompt and the shared portion of answers as context, guiding preference labeling to concentrate on the distinct answer segments. Regarding the chain-based ones, we performed comparisons directly based on prompts and different answers.

**Evaluation Metrics** To verify that the tree-based RM is a better preference encoder than the chain-based one, we fine-tuned the initial SFT models using two RM-based preference decoders: Proximal Policy Optimization (PPO) (Schulman et al., 2017) and Rejection Sampling Fine-Tuning (RFT) (Touvron et al., 2023). The methodology for evaluating model performance entails a comparative analysis of the models' responses to held-out prompts, utilizing GPT-4 as the judge. For all prompts regarding our GPT-4 preference annotations and evaluation criteria, refer to Appendix B.4.
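As a rough illustration of the tree-structured data collection described under Dynamic Tree Generation above, the sketch below branches continuations of a shared prefix and forms preference pairs between sibling branches, so that the shared prompt-plus-prefix serves as context. It is not the actual DTG procedure (see Appendix B.1 for that, including its diversity and stability controls); `sample_continuation` and all other names here are placeholders.

```python
import random

def sample_continuation(prompt, prefix, temperature=1.0):
    """Placeholder for a call to the SFT policy; returns one text segment
    continuing `prefix` for `prompt`. Replace with a real model call."""
    return f"<segment t={temperature:.1f} after '{prefix[-12:]}'>"

def build_response_tree(prompt, branching=2, max_depth=3, prefix=""):
    """Recursively branch continuations of a shared prefix.
    Returns a nested dict: {segment: subtree}."""
    if max_depth == 0:
        return {}
    tree = {}
    for _ in range(branching):
        seg = sample_continuation(prompt, prefix,
                                  temperature=random.uniform(0.7, 1.3))
        tree[seg] = build_response_tree(prompt, branching,
                                        max_depth - 1, prefix + seg)
    return tree

def sibling_preference_pairs(prompt, tree, prefix=""):
    """Pairs of sibling branches; the shared prompt + prefix becomes the
    context, so annotation can focus on the distinct segments."""
    pairs = []
    siblings = list(tree)
    for i in range(len(siblings)):
        for j in range(i + 1, len(siblings)):
            pairs.append({"context": prompt + prefix,
                          "answer_a": siblings[i],
                          "answer_b": siblings[j]})
    for seg, subtree in tree.items():
        pairs.extend(sibling_preference_pairs(prompt, subtree, prefix + seg))
    return pairs

tree = build_response_tree("Explain RLHF briefly.", branching=2, max_depth=3)
print(len(sibling_preference_pairs("Explain RLHF briefly.", tree)))
```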
## 6.2 Analysis of Experimental Results with PPO

**Abilities of Preference Encoding** The tree-based RM enhances the efficiency of preference encoding. In Table 2, we demonstrate under three key tasks that: (1) compared to the chain-based scenario, the tree-based RM enables initial SFT models to achieve a higher performance improvement; (2) initial SFT models fine-tuned with tree-based RMs outperform the chain-based ones in 65% of cases on average.
## 6.3 Analysis of Experimental Results with RFT

**Abilities of Fine-grained Distinction** To assess the capability of the tree-based RM in distinguishing fine-grained differences, we conduct RFT on the initial SFT model, Alpaca-7B, using different RMs. We sample $N$ responses for each training prompt and select the highest-scoring one (Best of $N$, *BoN*) as evaluated by the corresponding RM, following Bai et al. (2022b). This optimal response is then used for further fine-tuning of Alpaca-7B. We execute RFT for $N = 2^2, 2^3, \cdots, 2^9$. According to Figure 5, the tree-based RM significantly outperforms the chain-based ones in enhancing Alpaca-7B, exhibiting a continuous uptrend as the sample size $N$ grows. In contrast, the baseline RM exhibits notable insensitivity to variations in the number of sampled answers.

**Ablation Study on Preference Annotation** Our study, using RFT, explores how different proportions of responses in preference data influence the RM's performance. Figure 5 reveals that training RMs on preference data with complete responses leads to superior outcomes. This suggests that the model's fine-grained distinction abilities can be improved through adjustments in data generation methods, without altering annotation techniques.

| Datasets | Chain vs. SFT (Win / Lose) | Tree (Ours) vs. SFT (Win / Lose) | Tree (Ours) vs. Chain (Win / Lose) |
|---|---|---|---|
| HH-RLHF | 0.72 / 0.28 | 0.78 / 0.22 | 0.74 / 0.26 |
| GSM-8K | 0.57 / 0.43 | 0.65 / 0.35 | 0.63 / 0.37 |
| DialogueSum | 0.58 / 0.42 | 0.66 / 0.34 | 0.58 / 0.42 |
| Average | 0.62 / 0.38 | 0.70 / 0.30 | 0.65 / 0.35 |

**Data Scalability** To assess the scalability of the tree-based RM with larger preference datasets, we further replicate the RFT experiments on fine-tuned LLaMA-7B with scaling dataset sizes. As Figure 6 indicates, the tree-based RM demonstrates an augmented proficiency in distinguishing fine-grained differences from larger datasets, consistent with Gao et al.
(2022).
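For reference, the Best-of-$N$ selection step used in the RFT experiments above can be sketched in a few lines. The sketch is schematic: `policy_sample` and `reward_score` are placeholders standing in for the SFT policy and a trained (chain- or tree-based) RM, and the selected responses would then feed an ordinary supervised fine-tuning step.

```python
import random

def policy_sample(prompt):
    """Placeholder for sampling one response from the SFT policy."""
    return f"response-{random.randint(0, 10**6)}"

def reward_score(prompt, response):
    """Placeholder for scoring a response with the trained reward model."""
    return random.random()

def best_of_n(prompt, n):
    """Sample n responses and keep the one the RM scores highest (BoN)."""
    candidates = [policy_sample(prompt) for _ in range(n)]
    return max(candidates, key=lambda r: reward_score(prompt, r))

def build_rft_dataset(prompts, n=2**5):
    """One BoN response per training prompt; in the experiments n ranges
    over 2^2, ..., 2^9."""
    return [{"prompt": p, "response": best_of_n(p, n)} for p in prompts]

data = build_rft_dataset(["Summarize the dialogue...", "Solve: 17 + 25 = ?"],
                         n=2**4)
print(data[0]["response"])
```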
## 7 Conclusion and Outlook

In this study, we conceptualize RLHF as an autoencoding process, and introduce the induced Bayesian network to analyze reward generalization in RLHF from a graph theory perspective. As a case study using this set of tools, we propose a tree-based method for reward modeling, and validate its superiority over the chain-based baseline through both theoretical and experimental means. We expect our methodology to have wider applications in the analysis of reward generalization.

**Limitations & Future Work** The present study has focused on the RLHF paradigm, and has restricted attention to efficiency analysis on information structures. The scope of focus can potentially be extended to cover larger areas in the alignment field, such as the scaling analysis of scalable oversight methods (Ji et al., 2023b). Also, since the IBN method can potentially be utilized to help understand goal misgeneralization (Di Langosco et al., 2022; Shah et al., 2022), further exploration on this front is required, including drawing connections between IBN structures, out-of-distribution contexts, and goals. On the experimental side, while our research has successfully highlighted the encoding efficiency of tree-based RMs, PPO and RFT are not necessarily good decoders for tree-based RMs. Moving forward, future work could aim to design algorithms with enhanced decoding prowess, better leveraging the encoding potential of RMs with sophisticated information structures.
## Appendix Table of Contents

- A Formulations and Proofs
  - A.1 Formulating Information Structures in Reward Modeling
  - A.2 Analysis of the Chain-based Information Structure
  - A.3 Analysis of the Tree-based Information Structure
  - A.4 Analysis Under the High-Density Regime
  - A.5 Convergence of the Reward Model and the Language Model
- B Experiment Details
  - B.1 Dynamic Tree Generation
  - B.2 Complete vs. Incomplete Responses Annotation
  - B.3 Hyperparameters
  - B.4 GPT-4 Prompts
  - B.5 Case Study
## A Formulations and Proofs

## A.1 Formulating Information Structures in Reward Modeling

**Definition A.1** (Hypothesis Distribution). Given a response set $\mathcal{Y}$, the hypothesis distribution $\mathcal{P}_{\mathrm{Hypothesis}}$ is a probability distribution over the space $\mathbb{R}^{\mathcal{Y}}$. Here, $\mathcal{P}_{\mathrm{Hypothesis}}$ stands for the distribution of the reward functions that can be expressed by the pre-trained language models.

**Definition A.2** (Inductive Bias Edge Set). Given a response set $\mathcal{Y}$ and hypothesis distribution $\mathcal{P}_{\mathrm{Hypothesis}}(\cdot)$, the inductive bias edge set $E_{\mathrm{IB}}$ is defined as follows:
$$\left(y_{i},y_{j},\delta_{i,j}\right)\in E_{\mathrm{IB}}\ \Longleftrightarrow\ I_{h\sim\mathcal{P}_{\mathrm{Hypothesis}}}\left[h(y_{i}),h(y_{j})\right]>C\tag{3}$$
for $\forall y_{i},y_{j},\ i\neq j,\ i,j\in\{1,2,\ldots,|\mathcal{Y}|\}$. $C$ is a constant which provides a lower bound on the mutual information of any edge in $E_{\mathrm{IB}}$ over the distribution $\mathcal{P}_{\mathrm{Hypothesis}}$.

We define the inductive bias edge set $E_{\mathrm{IB}}$ to characterize the relevance of elements in $\mathcal{Y}$ before obtaining human rewards. The relevance may stem from factors such as semantic similarity among elements in $\mathcal{Y}$.

**Definition A.3** (Induced Bayesian Network). Given a response set $\mathcal{Y}$ and any human preference dataset $D = \left\{ (y^{\mathrm{A}}_{D,i}, y^{\mathrm{B}}_{D,i}, \delta_{D,i}) \right\}_{i=1}^{|D|}$, we define $D$'s *induced Bayesian network* (IBN) $G^{D}(\mathcal{Y}, E^{D})$ as a graph with vertex set $\mathcal{Y}$ and edge set
$E^{D} = E_{\mathrm{IB}} \cup E^{D}_{\mathrm{HP}}$. The human preference edge set $E^{D}_{\mathrm{HP}}$ is defined as
$$E^{D}_{\mathrm{HP}} = \left\{ (u^{D}_{j}, v^{D}_{j}, W^{D}_{j}) : j = 1 \ldots 2|D| \right\}$$
where the $j$-th edge connects $u^{D}_{j}$ with $v^{D}_{j}$ and contains information $W^{D}_{j}$. Here,
$$(u^{D}_{j}, v^{D}_{j}) = \begin{cases} \left(y^{\mathrm{A}}_{D,k},\, y^{\mathrm{B}}_{D,k}\right) & \text{if } j = 2k-1 \\[4pt] \left(y^{\mathrm{B}}_{D,k},\, y^{\mathrm{A}}_{D,k}\right) & \text{if } j = 2k \end{cases}$$
and $W^{D}_{j}(\cdot|\cdot) = p_{R^{D}_{v^{D}_{j}} \mid R^{D}_{u^{D}_{j}}}(\cdot|\cdot)$ is a conditional distribution determined by $\delta_{D,\lceil j/2 \rceil}$. Here, specifying the
conditional distributions instead of joint distributions avoids issues caused by the shift-invariance of reward scores.

In the induced Bayesian network that we define, the edges between any two points are bidirectional. In other words, when defining an edge from $y_{1}$ to $y_{2}$, we also define an edge from $y_{2}$ to $y_{1}$, and the meanings of the weights on these two edges are equivalent. Therefore, in the subsequent sections, for the sake of simplification, we generally consider the induced Bayesian network as an undirected graph without loss of generality.

**Assumption A.4** (The Information of an Edge Follows a Logistic Distribution). Given any dataset $D$ and induced Bayesian network $G^{D}(\mathcal{Y}, E^{D})$, we assume that, whether the edge from $y_{1}$ to $y_{2}$ belongs to $E_{\mathrm{IB}}$ or $E^{D}_{\mathrm{HP}}$, the information $W^{D} = p_{R^{D}_{y_{2}}\mid R^{D}_{y_{1}}}(\cdot|\cdot)$ is the probability density function of a logistic distribution, which means
$$R^{D}_{y_{2}}\,\middle|\,R^{D}_{y_{1}}=r\;\sim\;\begin{cases}\operatorname{Logistic}\left(r,\ \frac{1}{\beta_{(y_{1},y_{2})}}\right)&\text{if }(y_{1},y_{2})\in E_{\mathrm{IB}}\\[4pt]\operatorname{Logistic}\left(r+\delta,\ \frac{1}{\beta_{\mathrm{HP}}}\right)&\text{if }(y_{1},y_{2})\in E^{D}_{\mathrm{HP}}\end{cases}\tag{4}$$
where $\beta_{(y_{1},y_{2})}$ is a constant related to $(y_{1},y_{2})$, $\beta_{\mathrm{HP}}$ is a constant related to $E^{D}_{\mathrm{HP}}$, and $\delta$ is related to $(y_{1},y_{2})$, representing the human preference between $y_{1}$ and $y_{2}$. Here we assume that human preferences exhibit a certain degree of stability, which means that for any $(y_{1},y_{2})\in E^{D}_{\mathrm{HP}}$, $\beta_{\mathrm{HP}}$ has upper and lower bounds. Thus, without loss of generality, we assume that for any $(y_{1},y_{2})\in E^{D}_{\mathrm{HP}}$, the constant $\beta_{\mathrm{HP}}$ is independent of $E^{D}_{\mathrm{HP}}$.
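A small consistency note on Assumption A.4 (a standard fact, added here for convenience): a logistic distribution with scale $s$ has variance $\pi^{2}s^{2}/3$. Hence, when path variances are composed under the conditional-independence treatment below, each traversed edge contributes a fixed amount to the inference variance:
$$\operatorname{Var}\left[\operatorname{Logistic}(\mu,s)\right]=\frac{\pi^{2}s^{2}}{3},\qquad\text{so an }E_{\mathrm{IB}}\text{ edge contributes }\frac{\pi^{2}}{3\beta_{(y_{1},y_{2})}^{2}}\ \text{ and an }E^{D}_{\mathrm{HP}}\text{ edge contributes }\frac{\pi^{2}}{3\beta_{\mathrm{HP}}^{2}}.$$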
**Definition A.5** (Inference Path). Given any dataset $D$ and $y_{1},y_{2}\in\mathcal{Y}$, we call a sequence of edges $S=\{(s_{i},t_{i},W_{i})\in E^{D}: i=1\ldots k\}$ an *inference path* from $y_{1}$ to $y_{2}$ if $y_{1}=s_{1}$, $t_{k}=y_{2}$, and $s_{i+1}=t_{i}$ for all $i<k$. Assuming the independence between $R^{D}_{s_{i}}$ and $R^{D}_{t_{i+1}}$ conditional on $R^{D}_{s_{i+1}}$, one can uniquely determine the conditional distribution $p_{R^{D}_{y_{2}}\mid R^{D}_{y_{1}}}(\cdot|\cdot)$ based on $\{W_{i}: i=1\ldots k\}$, which we denote by $W_{S}(\cdot|\cdot)$.

There could be multiple possible inference paths between any pair of vertices. To choose the best one among them, we need to define the inference variance of an inference path.

**Definition A.6** (Inference Distance). Given any inference path $S$ in $G^{D}$ going from $y_{1}\in\mathcal{Y}$ to $y_{2}\in\mathcal{Y}$, its inference variance $\mathrm{IV}[S]$ is defined as $\operatorname{Var}\left[R^{D}_{y_{2}}\,\middle|\,R^{D}_{y_{1}}\right]$. The optimal inference path in $G^{D}$ between $y_{1}$ and $y_{2}$, denoted by $S^{D}_{\mathrm{opt}}(y_{1},y_{2})$, is the inference path with the smallest inference variance. The inference distance $d^{D}(y_{1},y_{2})$ between $y_{1}$ and $y_{2}$ is defined as
$\mathrm{IV}[S^{D}_{\mathrm{opt}}(y_{1},y_{2})]$. Similarly, we define $d_{\mathrm{IB}}(y_{1},y_{2})$ to be the minimum inference variance of paths leading from $y_{1}$ to $y_{2}$ that only traverse edges in $E_{\mathrm{IB}}$.

Here, the inference variance $\mathrm{IV}[S]$ and the inference distance $d^{D}(y_{1},y_{2})$ measure the uncertainty over the value of $R^{D}_{y_{2}}$ if one starts from the value of $R^{D}_{y_{1}}$ and follows the inference path $S$. They reflect our ability to determine the relative human preference between $y_{1}$ and $y_{2}$ based on information in $D$.

**Definition A.7** (Mean Inference Distance). The mean inference distance of a human preference dataset $D$ is defined by $\mathbb{E}_{y_{1},y_{2}\in\mathcal{Y}}\left[d^{D}(y_{1},y_{2})\right]$, where $y_{1},y_{2}$ are independently and equiprobably drawn.

**Remark A.8** (RM Inference and IBN Inference are Analogous). When the training of the RM on $D$ has converged, every sample in $D$ (i.e., every edge in $E^{D}_{\mathrm{HP}}$) serves as a soft constraint on $C$'s relative preference between the two compared passages, since any sample preference that is violated will create gradients that pull away from convergence. Therefore, the RM policy
that is converged upon represents the joint satisfaction of these soft constraints, which enables the RM to perform the equivalent of multi-hop inference on $G^{D}$. Thus, we consider an RM trained on dataset $D$ to be approximately equivalent to an optimal inference machine on the IBN $G^{D}$, which allows us to use the mean inference distance as the quality criterion for datasets. From now on, we will use the mean inference distance as the criterion for evaluating a dataset's quality. Also note that the inference variance focuses on the relative preference between two vertices, which avoids the problem of shift-invariant reward ratings.

**Assumption A.9** (Conditional Independence). Given any induced Bayesian network $G^{D}$ and any $y_{1},y_{2}\in\mathcal{Y}$, the optimal inference path from $y_{1}$ to $y_{2}$, $S^{D}_{\mathrm{opt}}(y_{1},y_{2})$, satisfies the following property:
$$p\left(R^{D}_{y_{1}},R^{D}_{y_{2}}\,\middle|\,R^{D}_{s_{i}}\right)=p\left(R^{D}_{y_{1}}\,\middle|\,R^{D}_{s_{i}}\right)\cdot p\left(R^{D}_{y_{2}}\,\middle|\,R^{D}_{s_{i}}\right)\tag{5}$$
for all $s_{i}$, where $s_{i}$ is a node in the optimal inference path $S^{D}_{\text{