doc_id | contents | metadata
---|---|---
57ebcf92-3b4a-42a3-8d9d-0bfd51eb4764 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Boxnet2 Best Prompt For Gpt-4 Score = 0.42 (Gpt-4 As The Testing Llm)
agents efficiently by exploring different combinations and managing resources to maximize the number of
boxes lifted per step. Ensure that agents are not duplicated within the same action plan.
- Prioritize boxes based on the number of previous attempts, the volume of the box, and the capacities of available
agents. Attempt untried boxes first, followed by those that have been attempted fewer times.
- Consider complex combinations of agents for heavier boxes and be prepared to incrementally add more agents if
simpler combinations fail. Provide examples of how to form these combinations.
- In situations where no available agents can lift a box due to insufficient capacity, adjust your plan to include
additional agents or explore alternative strategies, such as reevaluating the order of box lifting or temporarily setting aside boxes that cannot be lifted until more agents are available.
- Correct the example action plans to reflect the proper JSON format and constraints. Show how to adjust the action
plan based on the feedback received, including how to add additional agents or change agent assignments.
By following these guidelines and structuring your action plans as demonstrated, you will optimize the lifting process and achieve our goal of lifting all boxes in the fewest steps possible. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
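The prompt's advice to try simpler agent combinations first and add agents incrementally only when those fail can be sketched as a small search over capacity sums. This is an illustrative sketch only: the `agents` capacity map, the numeric volume model, and the function name are assumptions, not the paper's interface.

```python
from itertools import combinations

def smallest_lifting_team(agents, box_volume):
    """Return the smallest combination of agents whose combined lifting
    capacity covers the box volume, or None if no combination suffices.

    `agents` maps a hypothetical agent name to an assumed numeric capacity.
    """
    names = sorted(agents)
    # Try simpler (smaller) combinations first, adding agents incrementally
    # only when simpler combinations fail -- mirroring the prompt's advice.
    for size in range(1, len(names) + 1):
        for team in combinations(names, size):
            if sum(agents[a] for a in team) >= box_volume:
                return list(team)
    return None  # no available agents can lift this box

agents = {"agent0": 1.0, "agent1": 1.5, "agent2": 2.0}
print(smallest_lifting_team(agents, 3.0))  # ['agent0', 'agent2']
```

Returning `None` corresponds to the prompt's fallback case, where the planner must set the box aside until more agents become available.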
606beadd-a64d-4c48-b3b0-d7b9df6f2a7e | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Warehouse Human Prompt Score = 0.0 (Gpt-3.5-Turbo-16K-0613 As The Testing Llm) Score = 0.0 (Gpt-3.5-Turbo-0301) Score = 0.16 (Gpt-4 As The Testing Llm)
You are a central planner directing mobile transporting agents in a warehouse to pick boxes and place them into the target place.
Agents can only walk on horizontal tracks and enter specific regions to pick up boxes. Each agent can hold only one box at a time. Each agent can do the following actions:
1) When the robot is on the track, it can pick up one box whose location is 0.5 away from the robot (a location difference in either x or y). For example, "pick box 1.5 1.0". Note that the agent can only pick a box near its location: their row locations should differ by 0.5 and their column locations by 0.0. For example, agent0 on track 1 and column 3 can do "pick box 1.5 3.0" or "pick box 0.5 3.0".
2) When the robot is on the track, it can move its position by a distance of 1 either to the left or to the right. For example, "move left", "move right".
3) When the robot is on the target, it can move onto a track to carry the boxes. For example, "move to track 1".
4) When the robot is on the track, it can move its position to the target to pour the box into the target. For example, "move to target". Note that robots without a box can also move to the target to avoid being an obstacle to other robots.
All robots moving to the target will pour their boxes. Hence, the final goal is to pour all the boxes into the target.
Multiple robots can be located in the target at the same time, but they cannot be in the same track position at the same time.
The warehouse playground has a left side (column 0) and a right side; if an agent's column is at either of these two sides, it can only move right or move left, not both directions. If an agent is in the target, it can move to the left side of all the tracks. If an agent is at the left side of a track, it can move to the target and drop its box.
Your task is to assign each agent its task in the next step. After each step, the environment provides updates for each agent and the state of the remaining boxes. Your job is to coordinate the agents optimally to minimize the number of steps. [Do remember that each position (track and column location) can only accommodate one | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c2d4d5df-4eea-4756-9623-fb7b36db4651 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Warehouse Human Prompt Score = 0.0 (Gpt-3.5-Turbo-16K-0613 As The Testing Llm) Score = 0.0 (Gpt-3.5-Turbo-0301) Score = 0.16 (Gpt-4 As The Testing Llm)
the agent's column is at either of these two sides, it can only move right or move left, not both directions. If the agent is in the target, it can move to the left side of all the tracks. If the agent is at the left side of the track, it can move to the target and drop the box.
Your task is to assign each agent its task in the next step. After each step, the environment provides updates for each agent and the state of the remaining boxes. Your job is to coordinate the agents optimally to minimize the number of steps. [Do remember that each position (track and column location) can only accommodate one agent each step! Hence, you need to avoid collisions with other agents. Actions like moving two agents into the same position at the same time, or moving one agent into a position that already has an agent, are not allowed!]
Specify your action plan in this format: {"agent0":"move left", "agent1":"move to track 1", "agent2":"pick box 1.5 1.0", "agent3":"move to target", "agent4":"move right", "agent5":"pick box 1.5 3.0"}. Include an agent only if it has actions in the next step. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
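The bracketed collision constraint can be checked mechanically before an action plan in the format above is issued. The sketch below assumes a simplified state model — `(track, column)` tuples plus a special `"target"` location that may hold many agents at once — and the helper names and position encoding are illustrative assumptions, not the paper's simulator; agents omitted from the plan are not checked.

```python
def next_position(pos, action):
    """Apply one warehouse action to a simplified (track, column) state.
    'target' is a special location that may hold many agents at once."""
    if pos == "target":
        if action.startswith("move to track"):
            return (int(action.split()[-1]), 0)  # enter at the leftmost column
        return pos
    track, col = pos
    if action == "move left":
        return (track, col - 1)
    if action == "move right":
        return (track, col + 1)
    if action == "move to target" and col == 0:  # only from the left side
        return "target"
    return pos  # pick actions and invalid moves keep the position

def has_collision(plan, positions):
    """True if two agents in the plan would occupy the same (track, column)
    next step; the target area is exempt from the one-agent rule."""
    new = [next_position(positions[a], act) for a, act in plan.items()]
    occupied = [p for p in new if p != "target"]
    return len(occupied) != len(set(occupied))

positions = {"agent0": (1, 1), "agent1": (1, 3)}
plan = {"agent0": "move right", "agent1": "move left"}
print(has_collision(plan, positions))  # True: both would reach (1, 2)
```

A planner could reject and regenerate any plan for which `has_collision` returns `True` before sending it to the environment.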
fe20a3af-42d4-4f99-b711-74846c6aa4e1 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Warehouse Best Prompt For Gpt-4 Score = 0.512 (Gpt-4 As The Testing Llm)
You are a central planner tasked with the strategic coordination of autonomous mobile agents within a warehouse environment. Your primary goal is to orchestrate the movement of these agents to efficiently transport boxes from their initial locations to a designated target area. Each agent can carry only one box at a time. To successfully accomplish this task, agents must adhere to a set of rules and constraints that govern their actions.
The agents can perform the following actions, under specific conditions:
1) Pick Up Box: An agent can pick up a box if it is directly adjacent to it on the track, specifically 0.5 units away either in the x or y direction. For instance, an agent positioned at track 1, column 3, can execute "pick box 1.5 3.0" or "pick box 0.5 3.0" if the box is present and the agent is not already carrying a box.
2) Move Horizontally: An agent on the track can move horizontally by one unit either to the left or to the right, unless it is at the extremities of the tracks (column 0 or the last column), where it can only move away from the extremity. Use the commands "move left" or "move right" to direct this action.
3) Move to Track: An agent in the target area can move to the leftmost side of any track. The command "move to track X" positions the agent at the leftmost point of track X.
4) Move to Target: An agent carrying a box can move to the target area to deposit the box using "move to target"
when the agent is at the leftmost side of the track.
The following constraints must be observed:
- An agent not carrying a box may move to the target area to prevent obstructing the path of other agents.
- Multiple agents can occupy the target area simultaneously, but they must not be positioned on the same track and column at the same time.
- Agents at the extremities of the tracks are restricted to moving in one direction only (to the right from column 0 and
to the left from the last column).
- Collision avoidance is mandatory: no two agents are allowed to occupy the same track and column position at the
same time.
Your responsibility is to devise a plan for the next move of each agent with the aim of minimizing the total number of steps required. After each move, you will receive updated information about the positions of each agent and the | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e1b79732-039b-4f9c-97bd-ce2dc37a8703 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Warehouse Best Prompt For Gpt-4 Score = 0.512 (Gpt-4 As The Testing Llm)
target area simultaneously, but they must not be positioned on the same track and
column at the same time.
- Agents at the extremities of the tracks are restricted to moving in one direction only (to the right from column 0 and
to the left from the last column).
- Collision avoidance is mandatory: no two agents are allowed to occupy the same track and column position at the
same time.
Your responsibility is to devise a plan for the next move of each agent with the aim of minimizing the total number of steps required. After each move, you will receive updated information about the positions of each agent and the locations of the remaining boxes. Use this information to refine your strategy and prevent collisions.
Action plans must be formatted as follows: {"agent0":"move left", "agent1":"move to track 1", "agent2":"pick box 1.5 1.0", "agent3":"move to target", "agent4":"move right", "agent5":"pick box 1.5 3.0"}. Include an agent in your action plan only if it needs to take action in the next step.
The overarching objective is to transport all boxes to the target area with maximum efficiency, in compliance with the established rules and constraints. Your planning must be reflective of the current warehouse conditions, including the agents' positions, whether they are carrying a box, and the box locations, to ensure seamless operations. Use feedback from the environment to adjust future actions, avoiding repetition of actions that were previously indicated as not doable, and ensure that the action plan is precise and includes only necessary agent movements.
Gridworld1 Human prompt Score = 0.23 (GPT-3.5-turbo-16k-0613 as the testing LLM) Score = 0.25 (GPT-3.5-turbo-0301) Score = 0.73 (GPT-4 as the testing LLM)
You (the robot) are in a grid-like field to pick up all the goals in order and avoid all the obstacles. Each goal and obstacle is assigned to a 1x1 square.
The robot can move in four directions: up, down, left, and right. The robot can move to a square only if it is not occupied by an obstacle. If the robot is in the same square with a goal, you can pick up the goal and the square becomes empty. [(1) Note that the | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0e051a5f-2d38-4e2b-af54-06817504fc3b | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Warehouse Best Prompt For Gpt-4 Score = 0.512 (Gpt-4 As The Testing Llm)
0301) Score = 0.73 (GPT-4 as the testing LLM)
You (the robot) are in a grid-like field to pick up all the goals in order and avoid all the obstacles. Each goal and obstacle is assigned to a 1x1 square.
The robot can move in four directions: up, down, left, and right. The robot can move to a square only if it is not occupied by an obstacle. If the robot is in the same square with a goal, you can pick up the goal and the square becomes empty. [(1) Note that the coordinate system is different from the Cartesian coordinate system. The origin is at the top left corner. The coordinate representation is [row number, column number]. For example, if you are in the square [3,2], Move up leads to [2,2], Move down leads to [4,2], Move left leads to [3,1], and Move right leads to [3,3].
(2) In your response, you can only use {} to specify your action. For example, {Move up}. Do not add any other words or symbols in your response. Also, use {} only once in your whole response so that we know what the next action is without ambiguity.] Please learn from previous steps: do not purely repeat the actions, but learn why the state changes or remains in a dead loop. Avoid being stuck in action loops.
Remember: do not move to a square occupied by an obstacle! Do not move out of the field! Plan your action in each step based on your relative distance to the goals.
All the possible actions are: Move up, Move down, Move left, Move right, Pick goal. Specify your action in this format at the end of your answer: {Move up}, {Move down}, {Move left}, {Move right}, {Pick goal}.
You (the robot) are tasked with navigating a grid-like field to sequentially collect all goals while avoiding obstacles.
Each goal and obstacle occupies a distinct 1x1 square on the grid. Your current position is known, and you must use this information to make strategic decisions that adhere to the following optimized, clarified, and refined rules:
1. **Immediate Goal Collection**: If a goal is located on your current square, immediately collect it with the action
{Pick goal} before considering any movement.
2. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
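The coordinate convention in note (1) — origin at the top left, positions as [row, column], so "Move up" decreases the row index — can be captured in a small helper. A minimal sketch; the grid dimensions and obstacle set are assumed inputs for illustration.

```python
# Origin at the top left; positions are [row, column], so "up" decreases
# the row index and "left" decreases the column index (per the prompt).
MOVES = {
    "Move up": (-1, 0),
    "Move down": (1, 0),
    "Move left": (0, -1),
    "Move right": (0, 1),
}

def apply_move(pos, action, n_rows, n_cols, obstacles):
    """Return the new [row, col] after a move, or the old position if the
    move would leave the grid or enter an obstacle square."""
    dr, dc = MOVES[action]
    r, c = pos[0] + dr, pos[1] + dc
    if not (0 <= r < n_rows and 0 <= c < n_cols) or (r, c) in obstacles:
        return pos
    return [r, c]

print(apply_move([3, 2], "Move up", 6, 6, set()))  # [2, 2], as in the prompt
```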
82ae8070-b5e1-4374-acc4-53348a6240dd | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Warehouse Best Prompt For Gpt-4 Score = 0.512 (Gpt-4 As The Testing Llm)
}, {Move left}, {Move right}, {Pick goal}.
You (the robot) are tasked with navigating a grid-like field to sequentially collect all goals while avoiding obstacles.
Each goal and obstacle occupies a distinct 1x1 square on the grid. Your current position is known, and you must use this information to make strategic decisions that adhere to the following optimized, clarified, and refined rules:
1. **Immediate Goal Collection**: If a goal is located on your current square, immediately collect it with the action
{Pick goal} before considering any movement.
2. **Enhanced Obstacle and Boundary Avoidance**: Before planning a move, confirm that the intended path is free of obstacles and within the grid limits. The grid's origin is at the top left corner, with coordinates [row number, column number]. Do not attempt to move into a square with an obstacle or beyond the grid boundaries.
3. **Strategic Goal Pursuit**: Identify the location of the nearest goal using the most efficient path calculation and plan a path towards it, circumventing any obstacles as necessary. Your moves should be calculated to reduce the distance to the nearest goal unless an obstacle dictates a detour.
4. **Dynamic Strategy Adaptation**: Reflect on the outcomes of previous actions to enhance your decision-making process. Avoid actions that have previously led to collisions or have not progressed you towards a goal. Adjust your strategy to be more effective.
5. **Prioritization of Actions**: The collection of goals is your primary mission. Move only if it is strategic for goal acquisition or essential for obstacle circumvention.
6. **Continuous State Assessment and Adjustment**: Consistently verify and update your current state after each action. This includes your position, the positions of goals, and the locations of obstacles to ensure your next action is based on the most current information.
7. **Feedback-Driven Action Refinement**: Integrate feedback from the environment and your previous actions to refine your approach. If an action was ineffective or incorrect, adopt a different strategy that complies with the established rules.
8. **Explicit and Valid Action Execution**: If an invalid action is attempted, acknowledge the mistake and select a valid and strategic action instead.
9. **Precise Obstacle Mapping**: Maintain a clear and updated understanding of obstacle positions relative to your current location to avoid any prohibited | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
3f019c73-3a4c-444e-ac4e-de6f8354b312 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Warehouse Best Prompt For Gpt-4 Score = 0.512 (Gpt-4 As The Testing Llm)
obstacles to ensure your next action is based on the most current information.
7. **Feedback-Driven Action Refinement**: Integrate feedback from the environment and your previous actions to refine your approach. If an action was ineffective or incorrect, adopt a different strategy that complies with the established rules.
8. **Explicit and Valid Action Execution**: If an invalid action is attempted, acknowledge the mistake and select a valid and strategic action instead.
9. **Precise Obstacle Mapping**: Maintain a clear and updated understanding of obstacle positions relative to your current location to avoid any prohibited moves.
10. **Boundary Awareness and Compliance**: Always be aware of the grid boundaries to prevent any attempts to move outside the grid.
11. **Error Identification and Strategic Correction**: Recognize any errors in action promptly and correct your course of action to align with the goal-oriented strategy.
12. **Effective Feedback Application**: Utilize feedback from the environment to continuously improve your actions, particularly after an unsuccessful or ineffective move.
13. **Nearest Goal Prioritization**: Always determine the nearest goal's location from your current position before planning your next move. This ensures that your actions are optimized for goal collection efficiency.
14. **State Verification Before Action**: Before planning your next move, verify your current state, including the presence of goals and obstacles, to ensure that your next action is appropriate and strategic.
15. **Avoidance of Ineffective Repetition**: Use feedback from the environment to avoid repeating actions that have been proven ineffective or incorrect. Learn from past outcomes to make better decisions.
16. **Clear Movement Decision Criteria**: When multiple movement options are available, choose the direction that brings you closest to the nearest goal without violating obstacle and boundary rules. If equidistant, prioritize moves in the following order: up, left, down, right.
17. **Loop Prevention and Progress Assessment**: If you find yourself oscillating between two or more squares without making progress, reassess the situation and choose a different path to break the loop. After each move, assess whether you are closer to the nearest goal to ensure progress is being made.
18. **Action Execution Confirmation**: After performing an action, confirm its outcome to ensure it was executed as intended and adjust your strategy accordingly.
19. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
aad257b5-1d1e-4a05-a233-116dbecaf591 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Warehouse Best Prompt For Gpt-4 Score = 0.512 (Gpt-4 As The Testing Llm)
goal without violating obstacle and boundary rules. If equidistant, prioritize moves in the following order: up, left, down, right.
17. **Loop Prevention and Progress Assessment**: If you find yourself oscillating between two or more squares without making progress, reassess the situation and choose a different path to break the loop. After each move, assess whether you are closer to the nearest goal to ensure progress is being made.
18. **Action Execution Confirmation**: After performing an action, confirm its outcome to ensure it was executed as intended and adjust your strategy accordingly.
19. **Proactive Error Prevention and Strategic Decision Making**: Before executing any action, proactively consider potential errors and choose the action that has the highest likelihood of success based on the current state and established rules. Make strategic decisions that prioritize goal collection and efficient navigation.
20. **Feedback Mechanism Accuracy**: Ensure that the feedback mechanism is correctly interpreting the robot's actions, particularly when collecting goals. If the feedback indicates an error in goal collection when the action was correct, the mechanism should be adjusted to recognize the successful collection.
21. **Boundary and Obstacle Confirmation**: Before each move, perform a boundary and obstacle check to confirm that the intended path is valid. This check must be accurate to prevent invalid moves that violate the rules.
22. **Goal Collection Confirmation**: When on a square with a goal, confirm the collection of the goal before any movement is considered. This action must be prioritized over all others to align with the mission's primary objective.
23. **Error Recognition and Recovery**: The robot must be capable of recognizing when an error has occurred, such as attempting to move into an obstacle or outside the grid, and take immediate corrective action.
24. **Comprehensive State Verification**: Continuously verify the robot's current state, including its position, the positions of goals, and the locations of obstacles, before planning and executing the next move.
25. **Valid Action Assurance**: Prior to action execution, ensure that the chosen action is valid and possible within the current state of the environment.
26. **Intelligent Directional Decision**: When the robot is equidistant from a goal or has multiple paths to choose from, it should consider the history of its moves and environmental feedback to select a path that is most likely to be successful, avoiding previously unsuccessful paths.
27. **Goal Proximity Alert**: The robot | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
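Rule 16's tie-break order (up, left, down, right) combined with the distance-reduction criterion can be sketched as a greedy direction chooser. The function name and grid parameters are illustrative assumptions, not the paper's implementation.

```python
# Tie-break order from rule 16: up, left, down, right.
ORDER = [("Move up", (-1, 0)), ("Move left", (0, -1)),
         ("Move down", (1, 0)), ("Move right", (0, 1))]

def choose_move(pos, goal, n_rows, n_cols, obstacles):
    """Choose the move that most reduces Manhattan distance to `goal`,
    resolving ties in the order up, left, down, right, and skipping
    obstacle squares and moves off the grid."""
    best, best_dist = None, None
    for name, (dr, dc) in ORDER:
        r, c = pos[0] + dr, pos[1] + dc
        if not (0 <= r < n_rows and 0 <= c < n_cols) or (r, c) in obstacles:
            continue
        dist = abs(r - goal[0]) + abs(c - goal[1])
        # Strict '<' keeps the earliest direction on ties, giving the
        # up > left > down > right priority.
        if best_dist is None or dist < best_dist:
            best, best_dist = name, dist
    return best

print(choose_move((3, 2), (1, 2), 6, 6, set()))  # Move up closes the row gap
```

Note this is purely greedy: it implements rules 6 and 16 locally, but on its own it can still walk into dead ends that the backtracking rules are meant to handle.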
dbdbdf8b-6b8e-4044-89e2-0763c651d4b2 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Warehouse Best Prompt For Gpt-4 Score = 0.512 (Gpt-4 As The Testing Llm)
current state, including its position, the positions of goals, and the locations of obstacles, before planning and executing the next move.
25. **Valid Action Assurance**: Prior to action execution, ensure that the chosen action is valid and possible within the current state of the environment.
26. **Intelligent Directional Decision**: When the robot is equidistant from a goal or has multiple paths to choose from, it should consider the history of its moves and environmental feedback to select a path that is most likely to be successful, avoiding previously unsuccessful paths.
27. **Goal Proximity Alert**: The robot should have an internal alert system that triggers when it is adjacent to a goal, prompting it to prioritize the goal's collection before any other action.
28. **Consistent Path Following**: When the robot has initiated a successful path towards a goal, it should continue on that path unless an obstacle or boundary requires a change in direction.
Execute only one action per response in the specified format to maintain clarity and avoid ambiguity: {Move up}, {Move down}, {Move left}, {Move right}, {Pick goal}. Your next action should be clearly indicated using this format. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
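The single-action output format — exactly one of {Move up}, {Move down}, {Move left}, {Move right}, {Pick goal} per response — can be validated on the consumer side with a simple brace extractor. This parser is an assumption about how such responses might be checked, not the paper's evaluation code.

```python
import re

VALID_ACTIONS = {"Move up", "Move down", "Move left", "Move right", "Pick goal"}

def extract_action(response):
    """Return the single action named in braces, or None if the response
    does not contain exactly one valid {action}, as the prompt demands."""
    found = re.findall(r"\{([^{}]*)\}", response)
    if len(found) == 1 and found[0] in VALID_ACTIONS:
        return found[0]
    return None

print(extract_action("I will go north. {Move up}"))  # Move up
```

Rejecting responses with zero or multiple brace groups enforces the "use {} only once" rule mechanically rather than trusting the model's formatting.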
671e20a0-096c-4c8c-a1c0-d92caf1f9eb2 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Gridworld1 Best Prompt For Gpt-4 Score = 0.86 (Gpt-4 As The Testing Llm)
You (the robot) are tasked with navigating a grid-like field to collect all goals in sequence while avoiding obstacles.
Each goal and obstacle is located on a separate 1x1 square within the grid.
Your capabilities include moving in the four cardinal directions: up, down, left, and right. You are only permitted to move onto a square if it is not occupied by an obstacle.
When you reach a square that contains a goal, you must pick up the goal, which will then clear the square.
Adhere to these optimized guidelines for navigation and task execution:
1. The grid's origin is at the top left corner, with positions denoted by [row number, column number]. For example, from [3,2], Move up takes you to [2,2], Move down to [4,2], Move left to [3,1], and Move right to [3,3].
2. Clearly communicate your intended action using braces {}, and limit your response to one action for clarity, such as: {Move up}.
3. Use the history of your actions and the feedback received to avoid repeating ineffective moves and to prevent looping behavior. Learn from past outcomes to improve your decision-making process.
4. Before each move, check for obstacles in all four adjacent squares. Never attempt to move into a square with an obstacle.
5. Stay within the grid's boundaries to avoid moving off the field.
6. Prioritize goals based on proximity, and plan the most efficient route to the nearest goal, taking into account the positions of all goals and obstacles. Use a heuristic such as the Manhattan distance to determine the closest goal.
7. Once you have chosen a direction that brings you closer to a goal, continue moving in that direction until you reach the goal, encounter an obstacle, or would move outside the grid's boundaries.
8. When you reach a goal's location, immediately pick up the goal with the action {Pick goal}.
9. Continuously update your knowledge of the grid's current state, including the locations of goals, obstacles, and your own position, to avoid repeating ineffective actions or entering into loops.
10. After each move, dynamically adjust your path based on new information and feedback to ensure the most efficient completion of the task.
11. If a chosen path is blocked by an obstacle or leads to a dead end, backtrack and select an alternative route that brings you closer to | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
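Guideline 6's Manhattan-distance heuristic for picking the nearest goal is straightforward to compute. A minimal sketch; the tuple-based coordinates are an illustrative assumption.

```python
def manhattan(a, b):
    # Manhattan distance on [row, column] coordinates.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def nearest_goal(pos, goals):
    """Select the goal with the smallest Manhattan distance from `pos`,
    the heuristic guideline 6 suggests for choosing which goal to pursue."""
    return min(goals, key=lambda g: manhattan(pos, g))

print(nearest_goal((3, 2), [(0, 0), (4, 3), (1, 5)]))  # (4, 3)
```

Manhattan distance ignores obstacles, so it is a lower bound on the true path length — good enough for prioritization, but the actual route must still detour around obstacles.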
fd106c90-54cc-4d6d-b314-088879db3f7b | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Gridworld1 Best Prompt For Gpt-4 Score = 0.86 (Gpt-4 As The Testing Llm)
would move outside the grid's boundaries.
8. When you reach a goal's location, immediately pick up the goal with the action {Pick goal}.
9. Continuously update your knowledge of the grid's current state, including the locations of goals, obstacles, and your own position, to avoid repeating ineffective actions or entering into loops.
10. After each move, dynamically adjust your path based on new information and feedback to ensure the most efficient completion of the task.
11. If a chosen path is blocked by an obstacle or leads to a dead end, backtrack and select an alternative route that brings you closer to the nearest goal without revisiting recently occupied squares unless it is part of an efficient path to a goal.
12. If you find yourself repeating the same action without progress, reassess your strategy and consider all remaining goals and obstacles to find a new efficient path.
13. Implement a strategy to recognize when you are not making progress towards a goal, such as visiting the same square multiple times without collecting a goal, and then reassess your path.
Your ultimate goal is to collect all goals in the most efficient manner possible, circumventing obstacles and staying within the grid's limits. Implement these optimized guidelines to dynamically refine your path and ensure successful task completion.
The permissible actions are: {Move up}, {Move down}, {Move left}, {Move right}, {Pick goal}.
Gridworld2 Human prompt Score = 0.036 (GPT-3.5-turbo-16k-0613 as the testing LLM) Score = 0.021 (GPT-3.5-turbo-0301) Score = 0.26 (GPT-4 as the testing LLM)
You (the robot) are in a grid-like field to pick up all the goals in order and avoid all the obstacles. Each goal and obstacle is assigned to a 1x1 square.
The robot can move in four directions: up, down, left, and right. The robot can move to a square only if it is not occupied by an obstacle. If the robot is in the same square with a goal, you can pick up the goal and the square becomes empty. However, you should pick the goals in order, from 0 to larger. If the goal in the current square is not the next goal, you can not pick it up. You should move to other squares to find the | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
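Guidelines 12 and 13 ask the robot to recognize when it is not making progress; one minimal way to detect oscillation is to count recent revisits of the same square. The window and threshold values below are illustrative assumptions.

```python
from collections import Counter

def stuck_in_loop(history, window=8, threshold=3):
    """Heuristic loop detector: True if any square was visited `threshold`
    or more times within the last `window` positions, signalling that the
    robot should reassess its path (guidelines 12-13)."""
    recent = Counter(tuple(p) for p in history[-window:])
    return any(n >= threshold for n in recent.values())

path = [(2, 2), (2, 3), (2, 2), (2, 3), (2, 2)]  # oscillating between squares
print(stuck_in_loop(path))  # True
```

In practice the history would be reset whenever a goal is collected, since revisiting squares on the way to a new goal is legitimate progress.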
4f78e2ca-4465-495f-821c-b9cdcca96224 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Gridworld1 Best Prompt For Gpt-4 Score = 0.86 (Gpt-4 As The Testing Llm)
the goals in order and avoid all the obstacles. Each goal and obstacle is assigned to a 1x1 square.
The robot can move in four directions: up, down, left, and right. The robot can move to a square only if it is not occupied by an obstacle. If the robot is in the same square as a goal, you can pick up the goal and the square becomes empty. However, you should pick the goals in order, from goal 0 upward. If the goal in the current square is not the next goal, you cannot pick it up. You should move to other squares to find the next goal. [(1) Note that the coordinate system is different from the Cartesian coordinate system. The origin is at the top left corner. The coordinate representation is [row number, column number].
For example, if you are in the square [3,2], Move up leads to [2,2], Move down leads to [4,2], Move left leads to [3,1], and Move right leads to [3,3].
(2) The robot should pick up all the goals in order, indexed from 0 upward. For example, if there are 3 goals, the robot should pick up goal 0 first, then goal 1, and finally goal 2.
(3) In your response, you can only use {} to specify your action. For example, {Move up}. Do not add any other words or symbols in your response. Also, use {} only once in your whole response so that we know what the next action is without ambiguity.]
Please learn from previous steps: do not purely repeat the actions, but learn why the state changes or remains in a dead loop. Avoid being stuck in action loops. Remember: do not move to a square occupied by an obstacle! Do not move out of the field! Plan your action in each step based on your relative distance to the goals.
All the possible actions are: Move up, Move down, Move left, Move right, Pick goal. Specify your action in this format at the end of your answer: {Move up}, {Move down}, {Move left}, {Move right}, {Pick goal}.
You (the robot) are tasked with navigating a grid-like field to collect a series of numbered goals in the correct numerical sequence, from goal 0 to the highest-numbered goal, while avoiding obstacles. Each goal and obstacle occupies a distinct 1x | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
5049850a-72b2-493a-856a-209d522758b3 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Gridworld1 Best Prompt For Gpt-4 Score = 0.86 (Gpt-4 As The Testing Llm)
of the field! Plan your action in each step based on your relative distance to goals.
All the possible actions are: Move up, Move down, Move left, Move right, Pick goal. Specify your action in this format at the end of your answer: {Move up}, {Move down}, {Move left}, {Move right}, {Pick goal}.
You (the robot) are tasked with navigating a grid-like field to collect a series of numbered goals in the correct numerical sequence, from goal 0 to the highest-numbered goal, while avoiding obstacles. Each goal and obstacle occupies a distinct 1x1 square on the grid. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
16a136a8-51ae-4dcb-bf80-7d76da6f55a4 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Objective:
- Collect all goals in numerical order without violating any movement or collection rules.
Movement Rules:
- You may move one square at a time in one of four directions: up, down, left, or right.
- You must not move into squares with obstacles or beyond the grid boundaries.
Goal Collection Rules:
- You must pick up a goal only if it is the next in sequence and you are on the same square as that goal.
- Once a goal is picked up, the square it occupied becomes traversable.
- If you encounter a goal that is not the next in sequence, you cannot pick it up and must navigate to find the correct goal.
Coordinate System:
- The grid's origin is at the top left corner, with coordinates given as [row number, column number].
- Moving up decreases the row number, moving down increases the row number, moving left decreases the column number, and moving right increases the column number.
Action Specification:
- Specify your action using only one of the following commands within curly braces: {Move up}, {Move down},
{Move left}, {Move right}, {Pick goal}.
- Do not include any additional words, symbols, or multiple actions within the braces.
Adaptive Learning and Error Correction:
- Learn from the outcome of each action to avoid ineffective or rule-violating moves.
- Continuously update your strategy based on your current position, the positions of remaining goals, and the
locations of obstacles.
- Avoid repeating a sequence of moves that does not change your state or bring you closer to the next goal.
- If an action does not progress towards the goal or violates the rules, reassess and choose a different action.
Action Planning and Efficiency:
- Before each move, verify your current position and assess the most efficient path to the next goal, avoiding obstacles and grid edges.
- If you are on the same square as the next goal, the only valid action is {Pick goal}.
- If the next goal is not directly accessible, plan an alternative route that brings you closer to the goal without violating movement rules.
- Prioritize picking up the goal over moving if you are on the goal square.
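The "plan an alternative route" guideline above amounts to shortest-path search on the grid. As one possible realization (our sketch, not the paper's implementation), a breadth-first search over non-obstacle squares finds such a route:

```python
from collections import deque

def shortest_path(start, goal, n_rows, n_cols, obstacles):
    """Breadth-first search from `start` to `goal` on an n_rows x n_cols grid.

    Positions are (row, col) tuples; `obstacles` is a set of blocked cells.
    Returns the list of cells from start to goal (inclusive), or None if
    the goal is unreachable.
    """
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk back through parents
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nxt = (nr, nc)
            if (0 <= nr < n_rows and 0 <= nc < n_cols
                    and nxt not in obstacles and nxt not in parent):
                parent[nxt] = cell
                frontier.append(nxt)
    return None  # no obstacle-free route exists
```

Re-running the search after every pickup matches the prompt's advice to re-plan as the grid changes (a collected goal's square becomes traversable).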
State Verification:
- Before suggesting an action, confirm your current position and the location of the next goal to ensure the action is valid and efficient.
Your ultimate goal is to collect all goals in the correct sequence as efficiently as possible, ad | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
67976ad1-c7e4-4457-b938-b255c6534bb8 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Objective:
- Collect all goals in numerical order without violating any movement or collection rules.
- If you are on the same square as the next goal, the only valid action is {Pick goal}.
- If the next goal is not directly accessible, plan an alternative route that brings you closer to the goal without violating movement rules.
- Prioritize picking up the goal over moving if you are on the goal square.
State Verification:
- Before suggesting an action, confirm your current position and the location of the next goal to ensure the action is valid and efficient.
Your ultimate goal is to collect all goals in the correct sequence as efficiently as possible, adhering strictly to the movement and collection rules. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
f9fe0c1e-8d14-4245-b92c-a7ce1f3664e1 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Gridworld2 Best Prompt For Gpt-4 Score = 0.60 (Gpt-4 As The Testing Llm)
You (the robot) are tasked with navigating a grid-like field to sequentially collect goals, labeled from goal 0 to the highest-numbered goal, while avoiding obstacles. Each goal and obstacle occupies a distinct 1x1 square on the grid.
Your movements are limited to four directions: up, down, left, and right. You may only move onto a square if it is not occupied by an obstacle.
**Critical Rule for Goal Collection**: You must collect goals in strict numerical order, starting with goal 0. Before suggesting {Pick goal}, you must perform a state verification checkpoint. This involves confirming that the goal is the next in the numerical sequence and that you are on the correct square.
Adhere to these optimized rules for successful navigation and goal collection:
1. **Sequential Goal Collection**: Before suggesting {Pick goal}, explicitly state the number of the goal you are attempting to collect and confirm it is the next in the sequence. Do not attempt to collect a goal if it is not the correct one in the order.
2. **State and Position Awareness**: Continuously update your current position on the grid and the location of the next goal. Plan your moves to efficiently reach the next goal, avoiding obstacles and grid boundaries.
3. **Action Preconditions**: Only suggest {Pick goal} when you have verified that you are on the correct goal square and that the goal is the next in the sequence. Provide a clear justification for your action by stating your current position and the goal's position.
4. **Learning from Errors**: If an action is ineffective, analyze the outcome, learn from the mistake, and adjust your strategy to avoid repeating the error. State the reason for the error and the adjustment you will make.
5. **Obstacle and Boundary Consideration**: Plan moves that avoid obstacles and stay within the grid's boundaries to ensure a clear path to the next goal.
6. **Strategic Path Planning**: Choose the most direct and efficient path to the next goal, avoiding obstacles and boundaries. Re-evaluate your path after each move.
7. **Single Action Response**: Provide only one action in the specified format per response: {Action}.
8. **Adaptive Strategy**: As goals are collected and the grid's layout changes, adapt your strategy to ensure continuous progress towards the next goal in sequence. 9. **Avoiding Action Loops**: Recognize and break | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
21cd94ae-8d1c-442f-9d0e-c7f34b8770dd | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Gridworld2 Best Prompt For Gpt-4 Score = 0.60 (Gpt-4 As The Testing Llm)
obstacles and stay within the grid's boundaries to ensure a clear path to the next goal.
6. **Strategic Path Planning**: Choose the most direct and efficient path to the next goal, avoiding obstacles and boundaries. Re-evaluate your path after each move.
7. **Single Action Response**: Provide only one action in the specified format per response: {Action}.
8. **Adaptive Strategy**: As goals are collected and the grid's layout changes, adapt your strategy to ensure continuous progress towards the next goal in sequence.
9. **Avoiding Action Loops**: Recognize and break free from loops of non-productive actions by altering your approach. Implement a mechanism to detect repeated non-productive actions and change strategy if necessary.
10. **Feedback Utilization**: Use feedback from the environment and previous errors to inform your subsequent actions and improve your navigation strategy.
11. **Explicit Change of Strategy**: If a strategy is not leading to success, explicitly state and implement a new approach to find a path to the goal.
12. **Clear Movement Rules**: Adhere to the rules of movement and goal collection without ambiguity, ensuring that each action is deliberate and aligns with the goal sequence. Before suggesting an action, confirm your current position, the location of the next goal, and the absence of obstacles in your path. Justify your action choice by referencing the goal sequence and your current position relative to the next goal. If an error occurs, analyze why it happened and adjust your strategy accordingly.
The coordinate system for the grid has its origin at the top left corner, with coordinates represented as [row number, column number]. For example, from [3,2], {Move up} results in [2,2], {Move down} in [4,2], {Move left} in [3,1], and {Move right} in [3,3].
Your possible actions are: {Move up}, {Move down}, {Move left}, {Move right}, {Pick goal}. Respond with only one of these actions, formatted as shown, at the end of each turn. Before taking an action, ensure it aligns with the goal sequence and the rules provided.
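The single-action response format lends itself to mechanical validation. A minimal sketch, assuming responses are plain strings and the five actions listed above (the function name and regex are ours, not the paper's):

```python
import re

VALID_ACTIONS = {"Move up", "Move down", "Move left", "Move right", "Pick goal"}

def extract_action(response: str):
    """Return the single {Action} found in `response`, or None if the
    response contains zero or multiple braced segments, or an action
    outside the allowed set."""
    matches = re.findall(r"\{([^{}]*)\}", response)
    if len(matches) != 1:
        return None        # ambiguous: {} must be used exactly once
    action = matches[0].strip()
    return action if action in VALID_ACTIONS else None
```

A response with zero or multiple braced segments is rejected, mirroring the "use {} only once" constraint.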
Blocksworld Human prompt Score = 0.19 (GPT-3.5-turbo-16k-0613 as the testing LLM) Score = 0.33 (GPT | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
981b12f9-4643-4d7f-8f77-6488971e670b | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Gridworld2 Best Prompt For Gpt-4 Score = 0.60 (Gpt-4 As The Testing Llm)
{Move left} in [3,1], and {Move right} in [3,3].
Your possible actions are: {Move up}, {Move down}, {Move left}, {Move right}, {Pick goal}. Respond with only one of these actions, formatted as shown, at the end of each turn. Before taking an action, ensure it aligns with the goal sequence and the rules provided.
Blocksworld Human prompt Score = 0.19 (GPT-3.5-turbo-16k-0613 as the testing LLM) Score = 0.33 (GPT-3.5-turbo-0301) Score = 0.71 (GPT-4 as the testing LLM)
I am playing with a set of blocks where I need to arrange the blocks into stacks. Here are the actions I can do: Pick up a block. Unstack a block from on top of another block. Put down a block. Stack a block on top of another block. I have the following restrictions on my actions: I can only pick up or unstack one block at a time. I can only pick up or unstack a block if my hand is empty. I can only pick up a block if the block is on the table and the block is clear. A block is clear if the block has no other blocks on top of it and if the block is not picked up. I can only unstack a block from on top of another block if the block I am unstacking was really on top of the other block. I can only unstack a block from on top of another block if the block I am unstacking is clear. Once I pick up or unstack a block, I am holding the block. I can only put down a block that I am holding. I can only stack a block on top of another block if I am holding the block being stacked. I can only stack a block on top of another block if the block onto which I am stacking the block is clear. Once I put down or stack a block, my hand becomes empty. Once you stack a block on top of a second block, the second block is no longer clear. Please learn from previous steps. Do not purely repeat the actions, but learn why the state changes or remains in a dead loop. Avoid being stuck in action loops. Specify your action in this format at the end of your answer: pick up the {}, put down the {}, stack the {} on top of the | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
2baa1367-bcb1-41ae-80f1-3371a17d6972 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Gridworld2 Best Prompt For Gpt-4 Score = 0.60 (Gpt-4 As The Testing Llm)
I am holding the block being stacked. I can only stack a block on top of another block if the block onto which I am stacking the block is clear. Once I put down or stack a block, my hand becomes empty. Once you stack a block on top of a second block, the second block is no longer clear. Please learn from previous steps. Do not purely repeat the actions, but learn why the state changes or remains in a dead loop. Avoid being stuck in action loops. Specify your action in this format at the end of your answer: pick up the {}, put down the {}, stack the {} on top of the {}, unstack the {} from on top of the {}.
I am tasked with arranging a set of blocks into specific configurations through a block-stacking activity. My available actions are:
- Pick up a block that is clear and on the table.
- Unstack a clear block from the top of another block.
- Put down a block onto the table, ensuring my hand is empty afterward.
- Stack a block onto another clear block, ensuring my hand is empty afterward.
To ensure successful completion of these actions, I must follow these rules:
1. I can only manipulate one block at a time.
2. My hand must be empty before I can pick up or unstack a block.
3. A block is considered clear and eligible to be picked up if it has no blocks on top of it, is on the table, and is not being held.
4. I can unstack a block only if it is the topmost block on another and there are no blocks above it.
5. When I pick up or unstack a block, I will be holding it.
6. I can only put down or stack a block that I am currently holding.
7. A block can be stacked onto another only if the bottom block is clear.
8. My hand must be empty before and after I place or stack a block.
9. Stacking a block on top of another makes the bottom block non-clear.
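Rules 1-9 above are precondition checks over a simple state. The following is a hedged sketch of one possible encoding (the `on`/`holding` representation and function names are our assumptions, not the paper's):

```python
# State representation (our own sketch): `on[b]` is what block b rests on
# ("table" or another block); `holding` is the block in hand, or None.
def is_clear(block, on, holding):
    """A block is clear if nothing rests on it and it is not being held."""
    return holding != block and all(below != block for below in on.values())

def can_pick_up(block, on, holding):
    # Rules: hand empty, block on the table, block clear.
    return holding is None and on.get(block) == "table" and is_clear(block, on, holding)

def can_unstack(block, below, on, holding):
    # Rules: hand empty, block really on top of `below`, block clear.
    return holding is None and on.get(block) == below and is_clear(block, on, holding)

def can_stack(block, target, on, holding):
    # Rules: holding the block being stacked, target block clear.
    return holding == block and is_clear(target, on, holding)
```

An agent (or a harness scoring one) can call these guards before each proposed action; any False return corresponds to a precondition violation under the rules above.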
To optimize task execution and avoid errors, I will adhere to the following strategies:
- Conduct a comprehensive state verification before each action to ensure all preconditions are met: my hand is empty
before picking up or unstacking; the block is clear, on the table, and not being held for picking up; and I am holding a block before putting down or stacking.
- Maintain an accurate and | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b78f51da-7bf9-4f5a-bfd7-2c80fc366134 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Gridworld2 Best Prompt For Gpt-4 Score = 0.60 (Gpt-4 As The Testing Llm)
if the bottom block is clear.
8. My hand must be empty before and after I place or stack a block.
9. Stacking a block on top of another makes the bottom block non-clear.
To optimize task execution and avoid errors, I will adhere to the following strategies:
- Conduct a comprehensive state verification before each action to ensure all preconditions are met: my hand is empty
before picking up or unstacking; the block is clear, on the table, and not being held for picking up; and I am holding a block before putting down or stacking.
- Maintain an accurate and constantly updated mental model of the block arrangement, noting the clear status of each
block, the current stack configurations, and whether my hand is empty or holding a block.
- Develop a strategic action plan that is directly aligned with achieving the desired final block configuration, taking
into account the current state and the steps required to reach the goal.
- Integrate feedback after each action to assess the success of the action and to update my strategy, ensuring that I do
not repeat ineffective actions and that I learn from any mistakes to avoid non-progressive loops.
- Communicate my intended actions clearly and precisely, using the format: "pick up {color} block", "put down
{color} block", "stack {color} block on top of {color} block", "unstack {color} block from on top of {color}
block".
- Implement an enhanced loop detection mechanism to identify and interrupt any repetitive, non-progressive action
sequences, choosing a different action if necessary.
- Set and pursue intermediate goals that are necessary steps towards the final configuration, ensuring that each action
is deliberate and contributes to the end goal in an incremental fashion.
- Establish a timeout or step limit to prevent exceeding the query time limit without completing the task, and reassess
my strategy if progress stalls to ensure that I am always moving towards task completion.
- Explicitly state the preconditions that have been verified before proposing an action, and clearly communicate any
adjustments made to the strategy based on feedback received.
- Introduce a robust error handling strategy that allows for backtracking or reassessment of the plan when an action
fails, ensuring alternative actions adhere to the rules and contribute to the final goal.
By following these refined guidelines and continuously updating my approach based on the state of the blocks and the feedback received, I aim to | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
7facfd98-c218-4262-9692-5c3539f2c62c | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Gridworld2 Best Prompt For Gpt-4 Score = 0.60 (Gpt-4 As The Testing Llm)
time limit without completing the task, and reassess
my strategy if progress stalls to ensure that I am always moving towards task completion.
- Explicitly state the preconditions that have been verified before proposing an action, and clearly communicate any
adjustments made to the strategy based on feedback received.
- Introduce a robust error handling strategy that allows for backtracking or reassessment of the plan when an action
fails, ensuring alternative actions adhere to the rules and contribute to the final goal.
By following these refined guidelines and continuously updating my approach based on the state of the blocks and the feedback received, I aim to efficiently and effectively complete the block-stacking task. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
466a700c-bb7a-4ddd-90e4-4cef98a7340f | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Blocksworld Best Prompt For Gpt-4 Score = 0.95 (Gpt-4 As The Testing Llm)
To effectively arrange a set of blocks into the desired stacks, adhere to the following structured approach, which has been refined based on previous feedback and identified errors:
1. **Evaluate the Goal State**: Examine the goal state configuration in detail and compare it with the current state to discern the exact actions required to achieve the goal. Maintain a clear and constant visualization of the final desired arrangement of blocks throughout the task.
2. **Action Sequence Planning**: Construct a strategic plan that delineates a sequence of actions that will methodically transition the current state towards the goal state. Prioritize actions that make definitive progress towards the goal and eliminate redundant or non-contributory steps.
3. **Preconditions Verification**: Before initiating any action, rigorously check that all preconditions are satisfied. Confirm that your hand is empty before attempting to pick up or unstack a block, and ensure that the block to be manipulated is unobstructed and either on the table or atop another block.
4. **Execute Actions**: Implement the necessary actions, strictly following the prescribed format and constraints:
- To pick up a block: "pick up the {color} block."
- To unstack a block: "unstack the {color} block from on top of the {color} block."
- To put down a block: "put down the {color} block."
- To stack a block: "stack the {color} block on top of the {color} block."
5. **Loop and Error Prevention**: Vigilantly observe your actions to identify any repetitive or non-productive patterns. Upon detecting a loop, promptly reassess and revise the action plan. Document past errors to prevent their recurrence.
6. **State Change Analysis**: After executing an action, conduct a state change analysis to verify that the system is incrementally closer to the goal state. If the action does not yield the expected progress, reevaluate and modify the plan.
7. **Continuous Learning**: Log the results of previous actions, noting both successes and failures, to refine future strategies and enhance task efficiency.
8. **Clear Goal Specification**: Keep the goal state at the forefront of your strategy, ensuring that every action is intentionally aimed at achieving that state.
9. **Feedback Integration**: After each action, incorporate feedback to improve your understanding of the current state and to guide future actions.
10. **Loop | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
9c8b3ca2-cd06-4124-90a9-d95f8f49ea2b | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Blocksworld Best Prompt For Gpt-4 Score = 0.95 (Gpt-4 As The Testing Llm)
the system is incrementally closer to the goal state. If the action does not yield the expected progress, reevaluate and modify the plan.
7. **Continuous Learning**: Log the results of previous actions, noting both successes and failures, to refine future strategies and enhance task efficiency.
8. **Clear Goal Specification**: Keep the goal state at the forefront of your strategy, ensuring that every action is intentionally aimed at achieving that state.
9. **Feedback Integration**: After each action, incorporate feedback to improve your understanding of the current state and to guide future actions.
10. **Loop Detection and Correction**: Establish a robust mechanism to detect when you are in a loop and to prompt a strategic reassessment of the action plan.
11. **Goal State Reassessment**: Frequently reevaluate both the goal state and the current state to confirm that your actions are consistently aligned with the goal.
12. **Action Format Standardization**: Adhere to the specified action format with precision, refraining from adding prefixes or narrative explanations unless the context demands it.
13. **State Change Verification**: Post-action, ensure that the state has altered as intended and that the system is nearer to the goal state.
14. **Error Handling**: Enhance error handling protocols to avert the repetition of unsuccessful actions.
15. **Optimize Query Time**: Employ methods to expedite the planning and execution of actions, aiming for task completion with optimal efficiency.
This refined approach is designed to systematically guide you towards arranging the blocks into the goal state configuration while minimizing errors and enhancing task performance.
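Step 10's loop detection could be realized, for example, by counting repeated (state, action) pairs. Below is a minimal sketch under that assumption (the class name and threshold are illustrative, not from the paper):

```python
from collections import Counter

class LoopDetector:
    """Flag when the same (state, action) pair recurs too often,
    signalling a non-productive action loop."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.seen = Counter()

    def record(self, state, action):
        """Record one step; return True if a loop is suspected."""
        key = (state, action)
        self.seen[key] += 1
        return self.seen[key] >= self.threshold
```

When `record` returns True, the strategy should change, per the prompt's instruction to break free of non-productive loops.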
Logistics Human prompt
Score = 0.083 (GPT-3.5-turbo-16k-0613 as the testing LLM) Score = 0.12 (GPT-3.5-turbo-0301) Score = 0.50 (GPT-4 as the testing LLM)
You have to plan logistics to transport packages within cities via trucks and between cities via airplanes. Locations within a city are directly connected (trucks can move between any two such locations), and so are the cities. In each city there is exactly one truck and each city has one location that serves as an airport. Here are the actions that can be performed:
Load a package into a truck at a location. Load a package into an airplane at a location | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d4802e1c-104f-4501-82c7-90c955d2244d | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Blocksworld Best Prompt For Gpt-4 Score = 0.95 (Gpt-4 As The Testing Llm)
0.12 (GPT-3.5-turbo-0301) Score = 0.50 (GPT-4 as the testing LLM)
You have to plan logistics to transport packages within cities via trucks and between cities via airplanes. Locations within a city are directly connected (trucks can move between any two such locations), and so are the cities. In each city there is exactly one truck and each city has one location that serves as an airport. Here are the actions that can be performed:
Load a package into a truck at a location.
Load a package into an airplane at a location.
Unload a package from a truck at a location.
Unload a package from an airplane at a location.
Drive a truck from one location to another location within a city.
Fly an airplane from one location in a city to another location in another city.
The following are the restrictions on the actions:
A package can be loaded into a truck only if the package and the truck are in the same location.
Once a package is loaded into a truck, the package is not at the location and is in the truck.
A package can be loaded into an airplane only if the package and the airplane are in the same location.
Once a package is loaded into an airplane, the package is not at the location and is in the airplane.
A package can be unloaded from a truck only if the package is in the truck.
Once a package is unloaded from a truck, the package is not in the truck and is at the location of the truck.
A package can be unloaded from an airplane only if the package is in the airplane. Once a package is unloaded from an airplane, the package is not in the airplane and is at the location of the airplane.
A truck can be driven from one location to another if the truck is at the from-location and both from-location and to-location are locations in the same city. Once a truck is driven from one location to another, it is not at the from-location and is at the to-location.
An airplane can be flown from one city to another if the from-location and the to-location are airports and the airplane is at the from-location. Once an airplane is flown from one city to another the airplane is not at the from-location and is at the to-location. Please learn from previous steps | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
1ed21157-3aab-4588-a9be-01eecf994b07 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## Blocksworld Best Prompt For Gpt-4 Score = 0.95 (Gpt-4 As The Testing Llm)
truck can be driven from one location to another if the truck is at the from-location and both from-location and to-location are locations in the same city. Once a truck is driven from one location to another, it is not at the from-location and is at the to-location.
An airplane can be flown from one city to another if the from-location and the to-location are airports and the airplane is at the from-location. Once an airplane is flown from one city to another, the airplane is not at the from-location and is at the to-location. Please learn from previous steps. Do not purely repeat the actions, but learn why the state changes or remains in a dead loop. Avoid being stuck in action loops. Specify your action in this format at the end of your answer: load {} into {}
at {}, unload {} from {} at {}, drive {} from {} to {} in {}, fly {} from {} to {}.
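The load/unload restrictions above can be expressed as guarded state transitions. The following is a hedged sketch of one such encoding (the `at`/`inside` dictionaries and function names are our assumptions, not the paper's code):

```python
# `at` maps packages/trucks/airplanes to a location; `inside` maps a
# package to the vehicle carrying it, if any. (Illustrative only.)
def load(package, vehicle, location, at, inside):
    """Load `package` into `vehicle`; both must be at `location`."""
    if at.get(package) != location or at.get(vehicle) != location:
        return False                      # precondition failed
    del at[package]                       # package is no longer "at" anywhere
    inside[package] = vehicle             # ...it is in the vehicle
    return True

def unload(package, vehicle, at, inside):
    """Unload `package` from `vehicle` at the vehicle's location."""
    if inside.get(package) != vehicle:
        return False                      # package is not in this vehicle
    del inside[package]
    at[package] = at[vehicle]             # package ends up where the vehicle is
    return True

def move_vehicle(vehicle, from_loc, to_loc, at):
    """Drive or fly `vehicle` from `from_loc` to `to_loc`."""
    if at.get(vehicle) != from_loc:
        return False
    at[vehicle] = to_loc
    return True
```

The same-city constraint for driving and the airport-to-airport constraint for flying would be enforced by the caller; `move_vehicle` only checks the vehicle's current location.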
To optimize the logistics of transporting packages within cities using trucks and between cities using airplanes, follow these enhanced and precise guidelines:
1. **Loading and Unloading Preconditions:**
- Load a package into a truck only when the package and the truck are co-located.
- Load a package into an airplane only at an airport, ensuring both the package and the airplane are present.
- Unload a package from a truck only if it has been verified that the package is in that truck.
- Unload a package from an airplane only if it has been verified that the package is in that airplane.
2. **Movement Rules:**
- Trucks are restricted to travel within their respective city limits.
- Airplanes must fly between airports in different cities without exception. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.08702v1.md",
"file_path": "paper_data/2402.08702v1.md",
"file_size": 122970,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
30f9cefd-953e-4f11-a44f-2d0e550cdae1 | # Prompt Optimization In Multi-Step Tasks (Promst): Integrating Human Feedback And Preference Alignment
## 3. **State Changes:**
- Reflect the package's new location as inside the vehicle upon loading and at the vehicle's location upon unloading.
4. **Action Format:**
- Actions must be articulated as follows:
- For loading/unloading: "load {package} into {vehicle} at {location}" or "unload {package} from {vehicle} at
{location}"
- For driving: "drive {truck} from {from-location} to {to-location} in {city}"
- For flying: "fly {airplane} from {from-airport} to {to-airport}"
5. **Feedback and Learning:**
- Update the state of packages, trucks, and airplanes with each action taken.
- Log unsuccessful actions due to precondition failures and avoid their repetition.
- Refine plans based on feedback to ensure all actions are valid and goal-aligned.
6. **Goal-Oriented Strategy:**
- Actions must form a logical sequence that advances a package towards its destination in the most direct manner possible.
7. **Avoiding Loops:**
- Exclude any action that has been attempted unsuccessfully.
- Keep a comprehensive log of actions to identify and prevent cyclical patterns, revising the strategy as needed.
8. **Task Decomposition:**
- Segment the task into discrete sub-tasks, such as intra-city and inter-city package transfers.
- Tackle each sub-task systematically, one at a time.
9. **Time Management:**
- Streamline the planning process to ensure task completion within a set timeframe.
- Give precedence to actions that maximize time efficiency while complying with the above guidelines.
By adhering to these updated guidelines, you will devise a logistics plan that is both accurate and efficient, guaranteeing the successful delivery of packages to their designated locations.
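The "Feedback and Learning" and "Avoiding Loops" guidelines above, which tell the agent to log failed actions and never retry them, can be sketched as a small helper. The class and method names here are illustrative inventions, not part of the prompt:

```python
class ActionLog:
    """Tracks attempted actions so failed ones are never retried (illustrative sketch)."""

    def __init__(self):
        self.failed = set()   # actions that violated a precondition
        self.history = []     # full trace, useful for spotting cyclical patterns

    def record_failure(self, action: str) -> None:
        self.failed.add(action)
        self.history.append(("fail", action))

    def record_success(self, action: str) -> None:
        self.history.append(("ok", action))

    def should_attempt(self, action: str) -> bool:
        # Avoiding Loops: exclude any action that has already failed.
        return action not in self.failed


log = ActionLog()
log.record_failure("load p1 into truck1 at loc2")  # precondition failed
assert not log.should_attempt("load p1 into truck1 at loc2")
assert log.should_attempt("drive truck1 from loc1 to loc2 in city1")
```

The full `history` list makes it possible to scan for repeating action subsequences, which is how a planner could detect cycles rather than only exact repeats.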
## Logistics Best Prompt for GPT-4, Score = 0.74 (GPT-4 as the Testing LLM)
Your task is to manage the logistics of transporting packages within and between cities using trucks and airplanes.
Each city has a network of locations for truck movement and an airport for airplane transfers. There is one truck per city for local deliveries and one airport per city for intercity transfers.
To enhance logistics operations and avoid errors, follow these optimized steps:
1. **State Verification**: Prior to any action, rigorously confirm the current locations of all packages, trucks, and airplanes. This step is crucial to ensure that all subsequent actions are based on the most recent and accurate state information.
2. **Action Execution**: Execute actions strictly adhering to these preconditions:
- Load a package into a truck at a location only if the package and the truck are confirmed to be at that location.
- Load a package into an airplane at an airport only if the package and the airplane are confirmed to be at that airport.
- Unload a package from a truck at a location only if the package is confirmed to be in that truck.
- Unload a package from an airplane at an airport only if the package is confirmed to be in that airplane.
- Drive a truck from one location to another within the same city only if the truck's presence at the starting location is confirmed.
- Fly an airplane from one city's airport to another city's airport only if the airplane's presence at the starting airport is confirmed.
3. **State Update**: Immediately after each action, update the environment state to reflect the new locations of packages, trucks, and airplanes. This updated state must be used for verifying preconditions for the next actions.
4. **Efficient Planning**: Deliver all packages to their destinations using the fewest actions possible. Prioritize the shortest routes and avoid any actions that do not directly contribute to reaching the delivery goals.
5. **Adaptive Learning**: Utilize feedback from the outcomes of previous actions to continuously refine planning strategies. Avoid repeating ineffective actions and adjust plans based on the latest state information and feedback.
6. **Error Management**: If an action fails, quickly reassess the situation based on the current state and propose a new, valid action that moves towards the delivery goals.
7. **Clear Action Formatting**: Clearly express actions using the specified structure to avoid misunderstandings:
- load {package} into {truck/airplane} at {location/airport}
- unload {package} from {truck/airplane} at {location/airport}
- drive {truck} from {location} to {location} in {city}
- fly {airplane} from {airport} to {airport}
8. **Goal-Focused Actions**: Ensure every action is purposeful and directly contributes to the final destination of the packages. Eliminate any actions that are not goal-oriented.
9. **Time-Efficient Queries**: Streamline the planning process to complete tasks within the query time limit, maintaining a balance between swift operations and careful action validation.
10. **Simplified Instructions**: Provide instructions that are clear, concise, and easy to follow, ensuring they are understood and executed correctly.

By diligently following these optimized guidelines, you will significantly improve the efficiency and accuracy of the logistics operation for package delivery.
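The "State Verification", "Action Execution", and "State Update" steps of this prompt can be illustrated with a minimal precondition-checked executor. The state layout and function name are assumptions made for this sketch, not part of the original prompt:

```python
def execute(state, action):
    """Apply one logistics action only if its preconditions hold (sketch).

    `state` maps each package/truck to its current location; a package
    inside a vehicle is recorded as ("in", vehicle).
    Returns True and mutates `state` on success, False otherwise.
    """
    kind = action[0]
    if kind == "load":            # ("load", package, truck, location)
        _, pkg, trk, loc = action
        if state.get(pkg) == loc and state.get(trk) == loc:
            state[pkg] = ("in", trk)
            return True
    elif kind == "unload":        # ("unload", package, truck)
        _, pkg, trk = action
        if state.get(pkg) == ("in", trk):
            state[pkg] = state[trk]   # package ends up at the truck's location
            return True
    elif kind == "drive":         # ("drive", truck, src, dst)
        _, trk, src, dst = action
        if state.get(trk) == src:
            state[trk] = dst
            return True
    return False                  # precondition failed: log it and replan


state = {"p1": "locA", "truck1": "locA"}
assert execute(state, ("load", "p1", "truck1", "locA"))
assert execute(state, ("drive", "truck1", "locA", "locB"))
assert execute(state, ("unload", "p1", "truck1"))
assert state["p1"] == "locB"
```

Because `execute` refuses invalid actions instead of silently corrupting the state, a planner built on top of it always verifies preconditions against the freshest state, exactly as steps 1-3 require.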
# HGOT: Hierarchical Graph of Thoughts for Retrieval-Augmented In-Context Learning in Factuality Evaluation
Yihao Fang1,3, Stephen W. Thomas1 **and Xiaodan Zhu**2,3
1Smith School of Business, Queen's University
2Department of Electrical and Computer Engineering, Queen's University
3Ingenuity Labs Research Institute, Queen's University yihao.fang@gmail.com, {stephen.thomas, xiaodan.zhu}@queensu.ca | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09390v1.md",
"file_path": "paper_data/2402.09390v1.md",
"file_size": 88803,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
## Abstract
With the widespread adoption of large language models (LLMs) in numerous applications, the challenge of factuality and the propensity for hallucinations raises significant concerns. To address this issue, particularly in retrieval-augmented in-context learning, we introduce the hierarchical graph of thoughts
(HGOT), a structured, multi-layered graph approach designed to enhance the retrieval of pertinent passages during in-context learning.
The framework utilizes the emergent planning capabilities of LLMs, employing the divideand-conquer strategy to break down complex queries into manageable sub-queries. It refines self-consistency majority voting for answer selection, which incorporates the recently proposed citation recall and precision metrics to assess the quality of thoughts, linking an answer's credibility intrinsically to the thought's quality.
This methodology introduces a weighted system in majority voting, prioritizing answers based on the citation quality of their thoughts.
Additionally, we propose a scoring mechanism for evaluating retrieved passages, considering factors such as citation frequency and quality, self-consistency confidence, and the retrieval module's ranking. Experiments reveal that HGOT outperforms other retrieval-augmented in-context learning methods, including Demonstrate-Search-Predict (DSP), ReAct, Self-Ask, and Retrieve-then-Read on different datasets by as much as 7%, demonstrating its efficacy in enhancing the factuality of LLMs.
## 1 Introduction
The advancement of large language models (LLMs)
(Devlin et al., 2019; Raffel et al., 2020; Radford et al., 2018, 2019; Brown et al., 2020) has revolutionized many fields of NLP and a wide variety of applications, offering unprecedented capabilities of natural language understanding and generation. However, a critical challenge of these models is the tendency to "hallucinate" (Maynez et al.,
2020; Raunak et al., 2021; Bouyamourn, 2023)— generating content that is factually incorrect or not grounded in reality. This issue raises significant concerns about the reliability and trustworthiness of LLMs, particularly in high-stakes applications.
While numerous efforts have been made to address various aspects of this problem, a critical area that demands attention is retrieval-augmented in-context learning (Lazaridou et al., 2022; Izacard et al., 2022; Press et al., 2022; Khattab et al., 2022), a process where LLMs leverage external information to enhance their responses, which is the focus of our study.
In response to the challenge of hallucinations, we introduce the hierarchical graph of thoughts
(HGOT) framework, drawing inspiration from neuropsychological studies on the "hierarchy of goals" and working memory (Cowan, 2010; Jonides et al., 2008; Cowan, 2005). Our approach redefines how LLMs interact with and utilize external information sources. By constructing a structured, multilayered graph (Ying et al., 2018; Chen et al., 2022), HGOT allows for a more organized and efficient way of sourcing and incorporating relevant information, thereby reducing the incidence of hallucinations in LLMs. Despite these advances, the challenges that we need to overcome involve dynamically constructing a hierarchical graph, as well as evaluating and ranking the qualities of thoughts and retrieved passages in this complex structure.
The HGOT framework places a strong emphasis on the dynamic creation of a hierarchical graph structure by exploring the applicability of the emergent planning capabilities of LLMs (Wang et al., 2023a; Valmeekam et al., 2023) in breaking down complex queries (higher in the hierarchy) into simpler sub-queries (lower in the hierarchy). This method employs a divide-and-conquer strategy, which simplifies the problem-solving process and improves the accuracy and relevance of the information retrieved by the LLM.
Another key feature of the HGOT framework is an improved self-consistency majority voting mechanism (Wang et al., 2023b), which assesses the quality of the thoughts or rationales generated by the LLMs. The method utilizes metrics such as citation recall and precision (Gao et al., 2023) to evaluate the quality of the information used by the LLMs in forming their responses. The underlying premise is that the quality of an LLM's response is directly related to the quality of its underlying thought. Therefore, in the majority voting process, responses are given weights based on the citation quality of their thoughts.
Furthermore, the HGOT framework proposes a scoring mechanism to evaluate the quality of retrieved passages. This mechanism takes into account various factors, including the frequency of passage citation, the citation quality (Gao et al., 2023) of the thought, self-consistency confidence score (Xiong et al., 2023; Wang et al., 2023b), and the retrieval module ranking. By considering these diverse factors, the mechanism ensures that the information utilized in the LLM's response generation is both relevant and of high quality.
To validate the effectiveness of the proposed method, we selected FEVER (Thorne et al., 2018), Open-SQuAD (Rajpurkar et al., 2016; Karpukhin et al., 2020), and HotPotQA (Yang et al., 2018) to evaluate the models' proficiency in fact retrieval and reasoning. We divided these datasets into three groups: "Long", "Medium", and "Short", according to the question length, emphasizing sampling from the tails of the distribution, a detail that is frequently overlooked in studies. Our experiments demonstrate that the hierarchical graph of thoughts (HGOT) approach outperforms existing retrieval-augmented in-context learning methods such as Demonstrate-Search-Predict (DSP) (Khattab et al., 2022), ReAct (Yao et al., 2023b), Self-Ask (Press et al., 2022), and Retrieve-then-Read (Lazaridou et al., 2022; Izacard et al., 2022), underscoring the robustness and efficacy of our approach in enhancing LLMs' factuality.
In brief, we make the following contributions:
- We introduce HGOT and investigate LLM's
(emergent) planning ability in breaking down complex queries for graph construction.
- **Thought Quality:** HGOT selects the best answer by voting which involves assessing thought quality with citation recall and precision metrics.
- **Retrieval Quality:** We propose a scoring mechanism for evaluating retrieved passages based on citation frequency and quality, self-consistency confidence, and retrieval module ranking.
- We conduct extensive experiments on FEVER, Open-SQuAD, and HotPotQA, emphasizing sampling from the extremes of the distribution. The results demonstrate that HGOT surpasses other methods by as much as 7%.
## 2 Related Work
The "Retrieve-then-Read" pipeline (Lazaridou et al., 2022; Izacard et al., 2022) sends queries to a retrieval model (RM) to gather passages for a prompt that a language model (LM) uses for response generation. "Self-ask" (Press et al., 2022) and "Iterative Retriever, Reader, and Reranker"
(IRRR) (Qi et al., 2020) improve upon this approach through multi-hop retrieval, enabling the LM to ask follow-up questions that the RM answers. These answers, combined with the original prompt, enhance the LM's ability to respond to the initial question.
"ReAct" (Yao et al., 2023b) uses LLMs to generate reasoning traces and task-specific actions in an interleaved manner. While reasoning traces help the model induce, actions allow it to interface with external sources. Baleen (Khattab et al., 2021) summarizes multiple passages of information in each hop to be used in subsequent iterations. The "Demonstrate-Search-Predict" (DSP) approach (Khattab et al., 2022) enhances the multihop methodologies by automatically annotating
"chain-of-thought" (Wei et al., 2022) demonstrations. The potential weakness of those multi-hop pipelines lies in the generality and adaptability of their search operations. Especially, those pipelines face challenges when tasked with addressing inquiries that necessitate intricate planning for the retrieval of pertinent information.
Plan-and-Solve (PS) Prompting (Wang et al.,
2023a) involves breaking down complex tasks into manageable subtasks and executing them according to a formulated plan, with PS+ prompting enhancing reasoning quality through detailed instructions. However, PS hasn't yet utilized LLMs' planning capabilities with retrieval-augmented in-context learning. Other methods such as the "tree of thoughts" (Yao et al., 2023a), "graph of thoughts" (Besta et al., 2023), and RECURRENTGPT (Zhou et al., 2023) explore reasoning via tree, graph, or recurrent structures to improve problem-solving, but they face challenges in sourcing relevant information, suffering from drawbacks concerning the factual reliability of large language models.
## 3 Methodology
The Hierarchical Graph of Thoughts (HGOT)
framework involves creating a multi-layered graph that allows for a more organized and efficient sourcing and incorporation of relevant information. This structure aims to reduce the occurrence of hallucinations in LLMs. However, the initial challenges that we need to overcome involve dynamically constructing hierarchical graphs, along with assessing and ranking the qualities of thoughts and retrieved passages within this complex structure.
In terms of hierarchical graph construction, the HGOT framework utilizes the emergent planning ability of LLMs to break down complex queries into smaller, more manageable sub-queries, following a divide-and-conquer strategy.
To select the best answer for a query, the framework employs an improved self-consistency majority voting method (Wang et al., 2023b). This involves assessing the quality of thoughts using citation recall and precision metrics and weighting answers based on the citation quality of their thoughts.
Additionally, a scoring mechanism is proposed for evaluating the quality of retrieved passages.
This mechanism takes into account various factors such as the frequency of passage citation, the quality of citations in the thoughts, a self-consistency confidence score adjusted for citation quality, and the ranking provided by the retrieval module.
## 3.1 Hierarchical Graph Construction, Search, and Inference
Graph Construction: When utilizing the emergent planning ability to break down a complex question into smaller, more manageable sub-queries or steps, it's crucial to recognize that these sub-queries or steps are not standalone. Instead, they often exhibit interconnections that contribute to forming a complete answer. These steps and their connections create a dependency graph within a deeper level of the hierarchical graph, which guides the exploration of the complex question. (In this framework, the dependency graph is designed as a directed acyclic graph to avoid circular dependencies.) Further, each sub-query can be extended into a more detailed dependency graph at even deeper levels of the hierarchy. For example, as illustrated in Figure 1: A⃝, a query at the initial layer (Layer 1 or L1) can be extended into a dependency graph at a subsequent layer (Layer 2 or L2). Within L2, the first step could unfold into a four-step dependency graph in the next layer (Layer 3 or L3), while the third step in L2 might lead to a two-step dependency graph at the same third layer (L3).
Establishing a precise dependency graph is essential before progressing to the subsequent stage, as any error or ambiguity at this stage could significantly derail the solution path. To accurately infer this graph, there are several strategies that we can adopt. Initially, we employed the "Probe" procedure to gather references (referenced in Figure 1: 1⃝ and Appendix C.5). This involves collecting passages from the retrieval model and then scoring these passages by prompting the LLM to probe for an answer. The specifics of how passages are scored will be discussed in Section 3.3.

Subsequently, we designed the prompt template for the "Plan" procedure (Figure 1: 2⃝ and Appendix C.1). This template incorporates instructions, demonstrations (see Appendix D), and the collected passages. The aim is to stimulate the LLM and guide it towards a holistic understanding of the question and its interconnected components.
Once the "Plan" procedure is complete, we introduce the self-reflection technique (Appendix C.2), inspired by the work of Shinn et al. (2023). This involves prompting the LLM again to double-check if the output dependencies are accurate and align with the question in each step. The method encourages the LLM to focus internally on the dependencies without external influence, by providing only related steps or sub-queries. Finally, we formalize these dependencies into a structure that is more compatible with programming language formats (Appendix C.3).
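Since the dependency graph produced by "Plan" must be a directed acyclic graph, a formalized plan can be validated before moving on. A minimal acyclicity check, with illustrative names not taken from the paper:

```python
def is_acyclic(deps):
    """Check that a plan's dependency graph has no cycles.

    `deps` maps each step to the list of steps it depends on.
    Uses Kahn's algorithm: repeatedly remove steps with no
    unresolved dependencies; any leftovers imply a cycle.
    """
    remaining = {step: set(d) for step, d in deps.items()}
    while remaining:
        ready = [s for s, d in remaining.items() if not d]
        if not ready:
            return False  # every remaining step waits on another: cycle
        for s in ready:
            del remaining[s]
        for d in remaining.values():
            d.difference_update(ready)
    return True


plan = {"step1": [], "step2": ["step1"], "step3": ["step1", "step2"]}
assert is_acyclic(plan)
assert not is_acyclic({"a": ["b"], "b": ["a"]})
```

Rejecting cyclic plans at this point is what guarantees the topological sort in the subsequent "Search" stage is well-defined.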
Search: A crucial aspect of this stage involves using topological sorting and rewriting, as shown in Figure 1: 3⃝. Topological sorting within a dependency graph (i.e., a directed acyclic graph) ensures that steps influencing subsequent steps are processed in a sequential order. When evaluating a step or a sub-query, a "Probe" procedure is employed (refer to Figure 1: 1⃝), which gathers passages from the retrieval model and instructs the LLM to search for an answer by using the sub-query. In the context of the dependency graph, when Step 2 is contingent on Step 1, the question in Step 2 is rewritten (see Appendix C.4) to include the sub-query from Step 1 along with the answer obtained from the "Probe" procedure. This process ensures that the interconnections are well-articulated and traceable within the graph.
The "Probe" procedure for each sub-query does more than seek answers; it also gathers and scores relevant passages. Additionally, the "Plan" procedure is applied to each sub-query to create a dependency graph at a deeper level. Following this, the "Search" procedure (Figure 1: 3⃝) investigates the dependency graph topologically, and the "Infer"
procedure (Figure 1: 4⃝) is then utilized to calculate the final scores for all the passages collected in the earlier stages, to predict the answer, and to determine the confidence score. In each step or sub-query assessed during the "Search" procedure, the "Probe", "Plan", "Search", and "Infer" procedures are recursively executed until a specified depth of the graph is achieved, or the "Plan" procedure opts to stop further progression. Specifically, the termination condition is activated if the "Plan" procedure results in only a single step that closely resembles the sub-query being planned. The similarity between them is assessed using the cosine similarity of their BERT-based sentence embeddings (Reimers and Gurevych, 2019).
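The termination test compares the single planned step against the sub-query via cosine similarity of their sentence embeddings. With any embedding model standing in for Sentence-BERT, the check reduces to the following sketch (the threshold value is an assumption, not taken from the paper):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def should_stop(subquery_emb, planned_step_emb, threshold=0.9):
    # Stop recursing when the single planned step is essentially
    # the same sentence as the sub-query being planned.
    return cosine_similarity(subquery_emb, planned_step_emb) >= threshold


assert should_stop([1.0, 0.0], [1.0, 0.0])      # identical direction: stop
assert not should_stop([1.0, 0.0], [0.0, 1.0])  # orthogonal: keep recursing
```

In practice `subquery_emb` and `planned_step_emb` would come from a sentence-embedding model applied to the two question strings.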
Inference: Having the hierarchical graph of thoughts and their related passages collected from the retrieval model, the "Infer" procedure predicts the final answer to the query (Figure 1: 4⃝). Specifically, this procedure ranks all passages retrieved during the examination of the query and its sub-queries, as will be explained in Section 3.3. It subsequently selects the top K passages with the highest rankings to use as the prompt for the LLM. Along with demonstrations and instructions, the "Infer" procedure asks the LLM to think step by step, predict the final answer, and estimate the confidence score (Appendix C.5 and Appendix D). The algorithm for recursive planning, searching, and inferring within HGOT is detailed as follows:
## 3.2 Thought Quality
When assessing the quality of thoughts, we establish tuples $(\tau_1, a_1), ..., (\tau_m, a_m)$ as pairs of LLM-generated thoughts (rationales) and answers, as shown in Figure 1: 1⃝, 4⃝, and B⃝. The quality of a thought $\tau_i$ is determined by modifying the concepts of citation recall (REC) and citation precision (PREC) as introduced by Gao et al. (2023), in the following manner:

$$\rho_{i}:=\alpha\cdot1+\beta\cdot\text{REC}(\tau_{i})+\gamma\cdot\text{PREC}(\tau_{i})\tag{1}$$
Assuming there are $d$ distinct responses $\hat{a}_1, ..., \hat{a}_d$, with $d$ being less than or equal to $m$, we improve upon the self-consistency majority voting method (Wang et al., 2023b) by factoring in the thought qualities, defining the selected answer as:
## Algorithm 1 HGOT Traversal
▷ Let q be a question
▷ Let a be an answer, e.g., a_q is the answer to q
▷ Let G be a dependency graph (i.e., a directed acyclic graph)
▷ Let CTX be the context (incl. passages and scores)
▷ Let CI be a confidence score
▷ Let d be the level of depth in the hierarchical graph

2: procedure TRAVERSE(q, d)
3:   a_q, CI_q, CTX_q ← PROBE(q)
4:   G ← PLAN(q, CTX_q)
5:   if STOP(q, G, d) then
6:     return a_q, CI_q, CTX_q
7:   else
8:     CTX_G ← SEARCH(G, d + 1)
9:     a_q, CI_q, CTX ← INFER(q, CTX_q, CTX_G)
10:    return a_q, CI_q, CTX
11:  end if
12: end procedure

14: procedure SEARCH(G, d)
15:   q_1, ..., q_r ← TOPOLOGICAL_SORT(G)
16:   for i in 1...r do
17:     q_i ← REWRITE(q_i, IN_NEIGHBORS(q_i, G))
18:     a_{q_i}, CI_{q_i}, CTX_{q_i} ← TRAVERSE(q_i, d)
19:   end for
20:   return CTX_{q_1}, ..., CTX_{q_r}
21: end procedure
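Algorithm 1 can be rendered as a Python skeleton. The stubbed PROBE/PLAN/INFER calls would be LLM-backed in practice, so everything below is an illustrative stand-in rather than the paper's implementation:

```python
def traverse(q, depth, probe, plan, infer, stop, max_depth=3):
    """Recursive HGOT traversal sketch mirroring Algorithm 1.

    probe(q)             -> (answer, confidence, context)
    plan(q, ctx)         -> dependency graph {step: [steps it depends on]}
    infer(q, ctx, subs)  -> (answer, confidence, context)
    """
    a, ci, ctx = probe(q)
    graph = plan(q, ctx)
    if stop(q, graph, depth) or depth >= max_depth:
        return a, ci, ctx
    sub_ctxs = search(graph, depth + 1, probe, plan, infer, stop, max_depth)
    return infer(q, ctx, sub_ctxs)

def search(graph, depth, probe, plan, infer, stop, max_depth):
    """Visit sub-queries in dependency (topological) order."""
    ctxs = []
    for sub_q in topological_sort(graph):
        # In the full method, sub_q is first rewritten to include the
        # answers of the steps it depends on (Appendix C.4).
        _, _, sub_ctx = traverse(sub_q, depth, probe, plan, infer, stop, max_depth)
        ctxs.append(sub_ctx)
    return ctxs

def topological_sort(graph):
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in graph.get(node, []):
            visit(dep)
        order.append(node)  # dependencies are appended before dependents
    for node in graph:
        visit(node)
    return order


g = {"s2": ["s1"], "s3": ["s1", "s2"], "s1": []}
order = topological_sort(g)
assert order.index("s1") < order.index("s2") < order.index("s3")
```

The `max_depth` cap and the `stop` predicate together implement the two termination conditions named in the text: a fixed graph depth, or a plan that no longer decomposes the sub-query.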
$$\hat{a}^{*}=\operatorname*{arg\,max}_{\hat{a}_{h}\in\{\hat{a}_{1},...,\hat{a}_{d}\}}\sum_{i=1}^{m}\rho_{i}\delta(a_{i},\hat{a}_{h})\tag{2}$$
where $\delta$ is the Kronecker delta function, which equals 1 when the variables are the same and 0 otherwise.
Moreover, we develop the self-consistency confidence score (Xiong et al., 2023) by taking into account the thought qualities. This is defined as:
$$\mathbf{CI}=\frac{\sum_{i=1}^{m}\rho_{i}\delta(a_{i},\hat{a}^{*})}{\sum_{i=1}^{m}\rho_{i}}\tag{3}$$
Note that when $\alpha$ equals 1 and both $\beta$ and $\gamma$ are zero, these equations are simplified to the prediction and calibration based on self-consistency (Wang et al., 2023b; Xiong et al., 2023).
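Equations (1)-(3) can be checked with a small numeric sketch. The choice of $\alpha=\beta=\gamma=1$ and the toy recall/precision numbers below are arbitrary illustrations, not values from the paper:

```python
def thought_quality(rec, prec, alpha=1.0, beta=1.0, gamma=1.0):
    """Equation (1): rho_i = alpha * 1 + beta * REC + gamma * PREC."""
    return alpha * 1 + beta * rec + gamma * prec

# (recall, precision, answer) for m = 3 sampled thoughts
samples = [(1.0, 1.0, "Paris"), (0.0, 0.0, "Lyon"), (0.5, 1.0, "Paris")]
rhos = [thought_quality(r, p) for r, p, _ in samples]   # [3.0, 1.0, 2.5]

# Equation (2): pick the answer with the largest quality-weighted vote.
totals = {}
for rho, (_, _, ans) in zip(rhos, samples):
    totals[ans] = totals.get(ans, 0.0) + rho
best = max(totals, key=totals.get)

# Equation (3): confidence = weight of the winning answer / total weight.
ci = totals[best] / sum(rhos)

assert best == "Paris"
assert abs(ci - (3.0 + 2.5) / (3.0 + 1.0 + 2.5)) < 1e-9
```

Setting `beta=0` and `gamma=0` makes every `rho` equal to `alpha`, collapsing the computation to plain unweighted self-consistency voting, which matches the note above.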
## 3.3 Retrieval Quality
Assessing the quality of retrieved passages considers multiple aspects. These include how often the passage is cited, the quality of these citations (Gao et al., 2023), a self-consistency confidence score (Xiong et al., 2023), and the ranking given by the retrieval module (Figure 1: Ⓒ).
Assume $p$ is a particular passage retrieved, which serves as a part of the context in the "Probe" or "Infer" procedures. The pairs $(\tau_1, a_1), ..., (\tau_m, a_m)$ represent the generated thoughts (rationales) and answers produced when using ChatGPT with a temperature greater than zero. Statements or sentences $s_1, ..., s_{l_{\tau_i}}$ are parts of $\tau_i$. The process of natural language inference (denoted as a function $\mathbf{NLI}$) and a citation marker at the end of each statement (denoted as $\mathbf{M}$) work together to determine if a statement $s_j$ cites passage $p$, resulting in a value of either true or false. This is formally expressed as:
$$\hat{\delta}(p,s_{j})=\begin{cases}1,&\text{if}\mathbf{M}(p,s_{j})\text{or}\mathbf{NLI}(p,s_{j})\\ 0,&\text{otherwise}\end{cases}\tag{4}$$
We further define the "weighted citation frequency per thought" for a given passage $p$ as the total number of citations in $\tau_i$, adjusted by the quality of the thought $\tau_i$. Formally, it is presented as:
$$\nu(p,\tau_{i})=\rho_{i}\sum_{j=1}^{l_{\tau_{i}}}\hat{\delta}(p,s_{j})\tag{5}$$
The "weighted citation frequency" is the aggregate of these "weighted citation frequencies per thought" across all thoughts, and is denoted by:
$$\hat{\nu}(p)=\sum_{i=1}^{m}\nu(p,\tau_{i})\tag{6}$$
Next, we normalize this "weighted citation frequency" so that the highest value among all passages is equal to 1. The "normalized weighted citation frequency" is thus:
$$\bar{\nu}(p)=\frac{\hat{\nu}(p)}{\max_{1\leq k\leq n}\hat{\nu}(p_{k})}\tag{7}$$
Finally, during the "Probe" or "Infer" procedures, the quality score of the passage $p$ is updated repetitively, starting with the initial score $\sigma(p,0)$ provided by the search engine in the "Probe" procedure. The formula is expressed as follows:
$$\sigma(p,t+1)\leftarrow\vec{w}^{T}\cdot\begin{pmatrix}\sigma(p,t)\\ \bar{\nu}(p)\\ \mathbf{CI}\end{pmatrix}\tag{8}$$
where $\vec{w}=(w_{1},w_{2},w_{3})$ is a hyperparameter vector that can be tuned for different datasets, retrieval models and large language models.
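The retrieval-quality computation of Equations (5) through (8) can be sketched as follows. Here `cites(p, s)` stands in for Equation (4)'s combination of citation markers and NLI, and `w` is the weight vector of Equation (8); all names in this sketch are illustrative, not the paper's code:

```python
def weighted_citation_frequency(passage, thoughts, cites):
    """Equations (5)-(6): thoughts is a list of (rho_i, [s_1, ..., s_l]) pairs."""
    total = 0.0
    for rho, statements in thoughts:
        total += rho * sum(1 for s in statements if cites(passage, s))
    return total

def update_scores(passages, thoughts, cites, ci, w, scores):
    """Equations (7)-(8): one update of each passage's quality score sigma."""
    nu_hat = {p: weighted_citation_frequency(p, thoughts, cites) for p in passages}
    peak = max(nu_hat.values()) or 1.0    # avoid division by zero when nothing is cited
    w1, w2, w3 = w
    return {p: w1 * scores[p] + w2 * (nu_hat[p] / peak) + w3 * ci for p in passages}
```

Passages cited often, by high-quality thoughts, in a run with high self-consistency confidence accumulate higher scores than the raw search-engine ranking alone would give them.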
## 4 Data
We evaluate HGOT across three datasets: FEVER
(Thorne et al., 2018), Open-SQuAD (Rajpurkar et al., 2016; Karpukhin et al., 2020), and HotPotQA
(Yang et al., 2018). Considering that sentence length has been used as a parameter for estimating complexity in various NLP tasks (Platanios et al., 2019; Spitkovsky et al., 2010), we stratify the three datasets by sentence length to assess HGOT across different complexity levels, categorizing them into long, medium, and short.
The sentence length, measured by the number of tokens in a question, for the FEVER, Open-SQuAD, and HotPotQA datasets is illustrated in Figure 2. The median number of tokens in FEVER is 27, with a long tail of instances extending beyond the median (indicating possible complexity in reasoning; see Appendix B for a more in-depth examination of the data). Open-SQuAD and HotPotQA likewise exhibit a similar distribution. The training, development, and test distributions align well with each other, enabling the stratification of these datasets by sentence length.
Table 1: Number of examples in the FEVER, Open-SQuAD, and HotPotQA datasets by question length.

| Length | FEVER (Train/Dev/Test) | Open-SQuAD (Train/Dev/Test) | HotPotQA (Train/Dev/Test) |
|--------|------------------------|-----------------------------|---------------------------|
| Long   | 1619 / 113 / 113       | 1174 / 121 / 118            | 1504 / 168 / 137          |
| Medium | 2182 / 150 / 150       | 1181 / 133 / 159            | 1628 / 181 / 148          |
| Short  | 2182 / 150 / 150       | 1181 / 133 / 159            | 1628 / 181 / 148          |
Questions from FEVER and Open-SQuAD that exceed the 98.5th percentile in length are categorized as complex or long, while for HotPotQA, this categorization applies to questions above the 98th percentile. Questions of FEVER and Open-SQuAD that fall between the 1.5th and 98.5th percentiles are defined as medium length or medium difficulty, and for HotPotQA, this range is from the 2nd to the 98th percentile. Within this group of medium-length questions, about 1.5% of those from FEVER and Open-SQuAD are chosen for evaluation, compared to 2% of HotPotQA questions. Additionally, questions from FEVER and Open-SQuAD below the 1.5th percentile are labelled as simple or short, similar to those under the
2nd percentile for HotPotQA questions. Lastly, Table 1 displays the total number of examples across all three datasets, spanning nine categories.
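The stratification rule above can be sketched with a simple percentile cutoff on token counts (an illustrative sketch using a nearest-rank percentile and the FEVER/Open-SQuAD thresholds of 1.5/98.5; HotPotQA would use 2/98):

```python
def stratify(questions, low_pct=1.5, high_pct=98.5):
    """Label each question Long / Medium / Short by token-count percentile."""
    lengths = sorted(len(q.split()) for q in questions)

    def percentile(p):                    # nearest-rank percentile on sorted lengths
        k = max(0, min(len(lengths) - 1, round(p / 100 * (len(lengths) - 1))))
        return lengths[k]

    lo, hi = percentile(low_pct), percentile(high_pct)
    labels = {}
    for q in questions:
        n = len(q.split())
        labels[q] = "Long" if n > hi else "Short" if n < lo else "Medium"
    return labels
```

The exact percentile definition (nearest rank vs. interpolation) is an assumption of this sketch; any standard variant yields the same three-way split on these long-tailed distributions.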
Metrics: For Open-SQuAD and HotPotQA, we utilize the Exact Match (EM) and F1 scores (Rajpurkar et al., 2016). The EM score identifies the proportion of predictions that precisely align with the correct answers, while the F1 score assesses the average token overlap between the prediction and the correct answer. For FEVER, we only use EM, considering that the answers in FEVER are limited to three tokens or fewer.
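The two metrics can be sketched as below (using the common normalization of lowercasing and stripping punctuation and articles; the paper follows Rajpurkar et al., 2016, so treat this as an illustrative approximation):

```python
from collections import Counter
import re
import string

def normalize(text):
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 iff the normalized prediction equals the normalized gold answer."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    """Token-level F1 between normalized prediction and gold answer."""
    pred, ref = normalize(prediction).split(), normalize(gold).split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Dataset-level scores are then simple averages of these per-example values.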
## 5 Evaluation Setup
Baselines: Our benchmarking includes five approaches: "Vanilla LM" (Brown et al., 2020), "Retrieve-then-Read" (Lazaridou et al., 2022; Izacard et al., 2022), "Self-ask" (Press et al., 2022), "ReAct" (Yao et al., 2023b), and "Demonstrate-Search-Predict" (DSP) (Khattab et al., 2022). See Appendix E for further details.
Implementation Details: All approaches employed ChatGPT (gpt-3.5-turbo-1106) as the backbone LLM, with the exception of ReAct, which utilized text-davinci-002, given that ReAct's source code [1] is not fully compatible with gpt-3.5-turbo-1106. For the retrieval model, we used the Google Search API provided by SerpApi.com, following the "Self-ask" approach (Press et al., 2022). HGOT [2] was implemented using Python and the DSP framework (Khattab et al., 2022). Following Gao et al. (2023), we adopt a natural language inference (NLI) model (Honovich et al., 2022) in HGOT to measure thought quality and retrieval quality. Additionally, the topological sorting and deductions pertaining to HGOT were performed using the Python NetworkX [3] package.
## 6 Experimental Results
Findings and Analysis: The baseline models, referred to as "Vanilla LM", utilize few-shot in-context learning on ChatGPT without being augmented by retrieval models. These "Vanilla LM" models closely mirror the fundamental capabilities of ChatGPT as assessed in our factuality evaluation datasets. We observe that "Vanilla LM" generally excels at responding to short questions (or claims in FEVER), except when it comes to short Open-SQuAD questions (refer to Table 2). This exception is consistent with our dataset analysis (see Appendix B for details), where it is found that longer questions (or claims in FEVER) often demand the gathering of more facts and the undertaking of more complex reasoning. Conversely, questions of medium and short length in Open-SQuAD usually require identifying one or two specific pieces of knowledge. However, medium-length questions provide more context than the shorter ones.

Table 2: EM and F1 scores (%) on FEVER, Open-SQuAD, and HotPotQA, overall and by question-length category.

| Method | FEVER EM | Open-SQuAD EM | Open-SQuAD F1 | HotPotQA EM | HotPotQA F1 |
|---|---|---|---|---|---|
| **Overall** | | | | | |
| Vanilla LM | 54.72 | 17.43 | 33.91 | 33.58 | 43.93 |
| Retrieve-then-Read | 58.35 | 22.51 | 38.81 | 41.20 | 51.21 |
| Self-ask | 53.03 | 18.81 | 34.15 | 43.98 | 54.67 |
| ReAct | 45.04 | - | - | 35.47 | 42.18 |
| DSP | 55.45 | 20.65 | 36.09 | 47.23 | 61.13 |
| HGOT+Sampling (Ours) | 61.50 | 22.05 | 36.11 | 45.03 | 56.07 |
| HGOT+KNN (Ours) | 60.53 | 24.10 | 38.32 | 47.37 | 59.48 |
| **Long** | | | | | |
| Vanilla LM | 43.36 | 16.10 | 34.22 | 24.09 | 38.15 |
| Retrieve-then-Read | 46.90 | 29.66 | 44.60 | 35.77 | 50.05 |
| Self-ask | 46.90 | 20.34 | 35.10 | 42.34 | 59.32 |
| ReAct | 34.51 | - | - | 17.52 | 24.62 |
| DSP | 47.79 | 23.73 | 39.08 | 45.26 | 64.27 |
| HGOT+Sampling (Ours) | 53.98 | 28.81 | 42.21 | 37.23 | 53.36 |
| HGOT+KNN (Ours) | 54.87 | 28.81 | 46.27 | 43.07 | 59.77 |
| **Medium** | | | | | |
| Vanilla LM | 54.00 | 26.42 | 41.10 | 29.73 | 40.63 |
| Retrieve-then-Read | 59.33 | 28.30 | 43.14 | 35.81 | 45.43 |
| Self-ask | 52.00 | 27.04 | 41.05 | 41.89 | 51.92 |
| ReAct | 45.33 | - | - | 33.11 | 40.69 |
| DSP | 55.33 | 28.93 | 42.51 | 41.89 | 57.17 |
| HGOT+Sampling (Ours) | 57.33 | 27.67 | 40.25 | 41.89 | 53.33 |
| HGOT+KNN (Ours) | 61.33 | 31.45 | 42.17 | 46.62 | 59.21 |
| **Short** | | | | | |
| Vanilla LM | 64.00 | 9.43 | 26.49 | 44.59 | 51.59 |
| Retrieve-then-Read | 66.00 | 11.32 | 30.12 | 50.68 | 57.88 |
| Self-ask | 58.67 | 9.43 | 26.53 | 47.30 | 53.92 |
| ReAct | 52.67 | - | - | 51.35 | 56.89 |
| DSP | 61.33 | 10.06 | 27.41 | 54.05 | 62.72 |
| HGOT+Sampling (Ours) | 71.33 | 11.32 | 27.38 | 54.05 | 60.87 |
| HGOT+KNN (Ours) | 64.00 | 13.21 | 28.47 | 51.35 | 59.54 |

[1] https://github.com/ysymyth/ReAct
[2] https://github.com/fangyihao/hgot
[3] https://networkx.org/
Methods other than "Vanilla LM" include those that are augmented by retrieval mechanisms. In comparison, these retrieval-augmented approaches generally surpass the performance of "Vanilla LM", except in cases involving Self-ask and ReAct within the FEVER dataset (see the "Overall" section in Table 2). Additionally, the DSP method shows weaker performance in the FEVER dataset. This suggests that the ability to gather factual information is more crucial in FEVER than the capacity for multi-hop reasoning. Our approaches, HGOT+Sampling and HGOT+KNN (with HGOT+Sampling and HGOT+KNN representing HGOT combined with the demonstration selection methods of "balanced sampling" or "k nearest neighbors", as detailed in Appendix D), are versatile and exhibit strong performance across all three datasets, regardless of whether they prioritize the skill of accumulating factual data or conducting multi-hop comprehension and reasoning. Specifically for the FEVER dataset, HGOT+Sampling secures the top position, with HGOT+KNN closely behind in second place.
With a 61.50% EM score, HGOT+Sampling outperforms Retrieve-then-Read, which is third, by a margin of over 3% (refer to the "Overall" section in Table 2). In every length category of the FEVER dataset, namely "Long", "Medium", and "Short", either HGOT+Sampling or HGOT+KNN achieves the highest ranking. Notably, HGOT+Sampling exceeds DSP, the strongest baseline, by more than 7% in the "Long" category and surpasses Retrieve-then-Read by more than 5% in the "Short" category, where Retrieve-then-Read is the top among baselines. In the "Medium" category, Retrieve-then-Read competes closely with HGOT+KNN, underscoring the importance of fact-gathering over complex reasoning in FEVER, in line with findings in Appendix B. Moreover, both HGOT+Sampling and HGOT+KNN, on average, excel beyond Retrieve-then-Read's achievements in these scenarios.
Within the Open-SQuAD dataset, as detailed in Table 2's "Overall" section, HGOT+KNN stands out as the top performer, recording an Exact Match (EM) score of 24.10%, which is over 1.5% higher than its nearest competitor, Retrieve-then-Read. HGOT+KNN also leads in EM scores for both the "Medium" and "Short" categories and achieves the highest F1 score in the "Long" category of the dataset. Retrieve-then-Read demonstrates strong competitiveness in the Open-SQuAD dataset, closely matching HGOT+KNN's performance across all categories, in contrast to DSP, which shows weaker performance. This observation is consistent with our analysis in Appendix B, revealing that a large portion of the Open-SQuAD questions are designed to extract factual information, mainly asking "What", "How", and "When".
In the HotPotQA dataset, known for demanding multi-hop reasoning capabilities from models, HGOT+KNN achieved the top position in the total EM score. For the "Medium" category, HGOT+KNN recorded the highest EM score at
46.62%, surpassing the second-best performers, HGOT+Sampling, DSP, and Self-ask, by 4.73%.
Additionally, in this category, HGOT+KNN led in F1 score, outperforming the second-ranked DSP by over 2%. DSP proved to be a strong contender across the board in the HotPotQA dataset, closely matching the performance of our HGOT+KNN model, whereas the Retrieve-then-Read model fell short. This performance trend corroborates our dataset examination in Appendix B, confirming the necessity for models to possess robust multi-hop reasoning skills for the HotPotQA dataset.
Ablation Study: We examine the effect of the presence or absence of thought quality and retrieval quality, as well as how HGOT's performance varies with different hyperparameters. More precisely, we explore how the EM score interacts with the hyperparameters α, β, and γ as shown in Equation 1, and also how EM score relates to each element of ⃗w = (w1, w2, w3) as detailed in Equation 8. Specifically, setting α = 1, β = 0, and γ = 0 in Equation 1 is equivalent to a situation where thought quality is not considered, reducing the model to rely solely on prediction and calibration through self-consistency, as discussed in Wang et al. (2023b). Similarly, when w1 = 1, w2 = 0, and w3 = 0 in Equation 8, it simulates a condition where retrieval quality is disregarded, with the ranking of retrieved passages depending only on the search engine's score.
We include hyperparameter settings of α = 1, β = 0, and γ = 0, alongside w1 = 1, w2 = 0, and w3 = 0, to equalize the absence of thought quality and to simulate the absence of retrieval quality when searching for the optimal hyperparameter configurations for the medium-length category in the Open-SQuAD dataset. Figure 3 illustrates the EM scores associated with varying values of each hyperparameter. It is observed that the optimal EM score is attained with hyperparameter values of α = 0.2, β = 0.4, γ = 0.4, w1 = 0.2, w2 = 0.55, and w3 = 0.25, as detailed in Table 7
in Appendix F. This indicates that the best hyperparameter combination is found when both thought quality and retrieval quality are present, emphasizing the significance of introducing these qualities into the model.
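The ablation's search over (w1, w2, w3), and analogously over (α, β, γ), can be sketched as a grid over the probability simplex; `dev_em` below is a hypothetical stand-in for evaluating EM on the development split with a given weight vector:

```python
from itertools import product

def simplex_grid(step=0.05):
    """All (w1, w2, w3) with wi >= 0 and w1 + w2 + w3 = 1, on a fixed grid."""
    n = round(1 / step)
    for i, j in product(range(n + 1), repeat=2):
        if i + j <= n:
            yield (i * step, j * step, (n - i - j) * step)

def tune(dev_em, step=0.05):
    """Return the weight vector with the best dev-set EM score."""
    return max(simplex_grid(step), key=dev_em)
```

The boundary points of the grid, such as (1, 0, 0), recover the ablated settings in which thought quality or retrieval quality is absent, so the same sweep covers both the ablation and the full model.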
## 7 Conclusion
In our factuality evaluation, we chose FEVER, Open-SQuAD, and HotPotQA to assess models' abilities in both fact retrieval and reasoning. We segmented these datasets into three categories, "Long", "Medium", and "Short", based on the length of their questions. This categorization emphasizes the significance of examining both extremely short and long questions, an aspect often overlooked in research. We introduced HGOT, an approach that structures thoughts in a hierarchical graph format, leveraging emergent planning capabilities. It evaluates thoughts and retrieved passages by introducing metrics for thought and retrieval qualities, thereby safeguarding HGOT's capabilities in reasoning and fact-finding. Experiments show that HGOT stands out as a versatile approach, surpassing other models in FEVER and matching leading models such as Retrieve-then-Read in Open-SQuAD and DSP in HotPotQA.
## Limitations
HGOT employs OpenAI's ChatGPT for its language model, whereas alternative models such as Google's Gemini and Meta's Llama 2 have not been explored. HGOT's evaluation is conducted using the Google Search API from SerpApi.com as its retrieval model. Its performance could vary, either improving or declining, when used in conjunction with other search engines such as Microsoft Bing, Yahoo, and Baidu. Additionally, the retrieval model for HGOT could potentially include various domain-specific data sources; for example, this could involve aligning queries with pertinent information in relational databases such as Oracle and IBM's DB2, which are widely used in the finance industry. However, the effectiveness of these variant implementations has not been examined.
## A Dataset Summary Statistics
Table 3 presents a comparison of the FEVER, Open-SQuAD, and HotPotQA datasets across nine evaluated categories in our experiments. For each category, we assess the total number of instances, as well as the maximum, minimum, and median lengths of questions, in addition to calculating the mean and standard deviation for question lengths. It is noted that the question lengths in all three categories of the Open-SQuAD dataset are generally shorter compared to the equivalent categories in the FEVER and HotPotQA datasets. Furthermore, the "Long" and "Medium" categories exhibit larger standard deviations in question length across all three datasets when compared to the "Short" categories.
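The per-category summary statistics described above can be reproduced with a few lines (an illustrative sketch using the standard-library statistics module and whitespace tokenization):

```python
from statistics import mean, median, stdev

def length_summary(questions):
    """Count, max, min, median, mean, and standard deviation of question lengths."""
    lengths = [len(q.split()) for q in questions]
    return {
        "examples": len(lengths),
        "max": max(lengths),
        "min": min(lengths),
        "median": median(lengths),
        "mean": mean(lengths),
        "stdev": stdev(lengths) if len(lengths) > 1 else 0.0,
    }
```

Running this over each of the nine dataset/length splits yields the columns of Table 3.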
Table 3: Comparison of the FEVER, Open-SQuAD, and HotPotQA datasets across the nine evaluated categories. (The original table's question-length statistics, i.e. maximum, minimum, median, mean, and standard deviation, are not recoverable here; the split sizes are shown below.)

| Dataset | Length | Train | Dev | Test |
|---|---|---|---|---|
| FEVER | Long | 1619 | 113 | 113 |
| FEVER | Medium | 2182 | 150 | 150 |
| FEVER | Short | 2182 | 150 | 150 |
| Open-SQuAD | Long | 1174 | 121 | 118 |
| Open-SQuAD | Medium | 1181 | 133 | 159 |
| Open-SQuAD | Short | 1181 | 133 | 159 |
| HotPotQA | Long | 1504 | 168 | 137 |
| HotPotQA | Medium | 1628 | 181 | 148 |
| HotPotQA | Short | 1628 | 181 | 148 |
## B Dataset Examples and Examination

## B.1 FEVER Data Examples and Examination
The FEVER dataset necessitates that the model gathers relevant background information or context regarding the subject, such as knowing what the Boeing 767 is as stated in the claim "The Boeing 767 became the most frequently used airliner for transatlantic flights between North America and Europe in the 1990s" (Table 4). Subsequently, it is required to conduct logical analysis on all the specific facts collected. Claims that are longer typically require the accumulation of more facts and knowledge, as well as the undertaking of more sophisticated reasoning. As a result, the complexity of a claim is often proportional to its length.
| Claim | Answer |
|------------------------------------------------------------|---------------------------------------------------------------|
| Length | |
| SUPPORTS | The Boeing 767 became the most frequently used airliner |
| for transatlantic flights between North America and Europe | |
| | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09390v1.md",
"file_path": "paper_data/2402.09390v1.md",
"file_size": 88803,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
da7b035b-53d9-4dd3-8194-4f157080d175 | # Hgot: Hierarchical Graph Of Thoughts For Retrieval-Augmented In-Context Learning In Factuality Evaluation
## B Dataset Examples And Examination B.1 Fever Data Examples And Examination
| The Boeing 767 became the most frequently used airliner |
| for transatlantic flights between North America and Europe | |
| in the 1990s. | |
| Long | |
| REFUTES | In Kentucky, the electric chair has been kept in operation |
| except for those whose capital crimes were committed prior | |
| {
"creation_datetime": "2024-03-04",
"file_name": "2402.09390v1.md",
"file_path": "paper_data/2402.09390v1.md",
"file_size": 88803,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d9465382-b84b-4655-b121-bfd397eb51ca | # Hgot: Hierarchical Graph Of Thoughts For Retrieval-Augmented In-Context Learning In Factuality Evaluation
## B Dataset Examples And Examination B.1 Fever Data Examples And Examination
| In Kentucky, the electric chair has been kept in operation |
| except for those whose capital crimes were committed prior | |
| to March 31, 1998, and who choose electrocution. | |
| REFUTES | The House of the Spirits is about the life of a young lady |
| named Clara during the military dictatorship in Algeria. | |
| NOT ENOUGH INFO | One Flew Over the Cuckoo's Nest won the five major |
| Academy Awards the year it was released, the second film | | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09390v1.md",
"file_path": "paper_data/2402.09390v1.md",
"file_size": 88803,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
55fb35a7-6be7-4683-8a22-61e11fe957d0 | # Hgot: Hierarchical Graph Of Thoughts For Retrieval-Augmented In-Context Learning In Factuality Evaluation
## B Dataset Examples And Examination B.1 Fever Data Examples And Examination
| One Flew Over the Cuckoo's Nest won the five major |
| Academy Awards the year it was released, the second film | |
| to do so. | |
| SUPPORTS | In 2012, Simi Valley, California, reported a higher median |
| household income than that of the nation overall. | |
| REFUTES | Planet Hollywood Las Vegas is operated by all entities except |
| an American gaming corporation. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09390v1.md",
"file_path": "paper_data/2402.09390v1.md",
"file_size": 88803,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
84bbd62d-c2f5-4a32-99dd-736ece7d55c7 | # Hgot: Hierarchical Graph Of Thoughts For Retrieval-Augmented In-Context Learning In Factuality Evaluation
## B Dataset Examples And Examination B.1 Fever Data Examples And Examination
|
{
  "creation_datetime": "2024-03-04",
  "file_name": "2402.09390v1.md",
  "file_path": "paper_data/2402.09390v1.md",
  "file_size": 88803,
  "file_type": null,
  "last_accessed_datetime": "2024-03-04",
  "last_modified_datetime": "2024-02-22"
}

# Hgot: Hierarchical Graph Of Thoughts For Retrieval-Augmented In-Context Learning In Factuality Evaluation

## B Dataset Examples And Examination

## B.1 Fever Data Examples And Examination

| Claim | Label |
|-------|-------|
| Planet Hollywood Las Vegas is operated by all entities except an American gaming corporation. | REFUTES |
| **Medium** | |
| Chris Bosh plays in the National Basketball Association as a professional basketball player. | SUPPORTS |
| Pierce County, Washington is the location of the lowest mountain in Washington. | NOT ENOUGH INFO |
| The Airbus A380 entered commercial service on October 25, 2017. | REFUTES |
| The Nobel Prize in Chemistry was awarded to a person from the Kingdom of the Netherlands. | SUPPORTS |
| Estonia is a country. | SUPPORTS |
| Edward Cullen was created. | NOT ENOUGH INFO |
| **Short** | |
| Dopamine prevents neuromodulation. | REFUTES |
| Backing vocalists are performers. | SUPPORTS |
| Reanimation is a book. | NOT ENOUGH INFO |
## B.2 Open-Squad Data Examples And Examination
As demonstrated in Table 5 of the Open-SQuAD dataset, the bulk of questions are focused on "What", "How", "When", and "Why", requiring the accumulation of factual data for answers. Additionally, questions of medium and short length typically need the collection of one or two specific pieces of information or knowledge. For instance, the question "In what geographical portion of Wales is Abercynon located?" necessitates identifying the specific location of Abercynon within Wales. Notably, medium-length questions tend to offer more context for information retrieval compared to those in the "Short" category, such as "What is septicemia?". Thus, the inclusion of "Short" category questions in Open-SQuAD doesn't suggest they are easy to answer, especially for models that find it challenging to gather factual data. Conversely, "Long" category questions usually demand more extensive fact-finding and complex reasoning.
| Question | Answer |
|----------|--------|
| **Long** | |
| What was the number of times the Denver Broncos played in a Super Bowl by the time they reached Super Bowl 50? | eight |
| What is the application of prime numbers used in information technology which utilizes the fact that factoring very large prime numbers is very challenging? | public-key cryptography |
| When did the UMC's General Board of Church and Society call on all United Methodists to abstain from alcohol for Lent? | 2011 and 2012 |
| What is the minimum distance between a patient's home and the nearest pharmacy that allows a physician in Austria to give out medicine? | more than 4 kilometers |
| Approximately how many names were signed on an online petition on the Parliamentary website in response to the closing of the Musical Instruments gallery? | over 5,100 |
| **Medium** | |
| In what geographical portion of Wales is Abercynon located? | south |
| How long has the Doctor Who Magazine been in circulation? | since 1979 |
| What social construct did Huguenot refugees in Canterbury practice? | economic separation |
| Why were Johann Esch and Heinrich Voes executed by the Catholic Church? | for Lutheran views |
| Who was the first known European to visit China and return? | Marco Polo |
| **Short** | |
| What is septicemia? | a type of "blood poisoning" |
| What shape are Plastoglobuli? | spherical bubbles |
| What do carotenoids absorb? | light energy |
| What is a prasinophyte? | a green algal derived chloroplast |
| What was AppleTalk? | a proprietary suite of networking protocols developed by Apple Inc |
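The Short/Medium/Long grouping used in the table above can be approximated mechanically. The following is an illustrative sketch only — the word-count thresholds (`short_max`, `medium_max`) are assumptions chosen to fit these examples, not values taken from the paper.

```python
# Illustrative sketch: bucket a question into a length category by word count.
# The thresholds are assumptions, not the paper's actual categorization rule.
def length_category(question: str, short_max: int = 6, medium_max: int = 12) -> str:
    n_words = len(question.split())
    if n_words <= short_max:
        return "Short"
    if n_words <= medium_max:
        return "Medium"
    return "Long"

print(length_category("What is septicemia?"))                                          # Short
print(length_category("In what geographical portion of Wales is Abercynon located?"))  # Medium
```

Any monotone measure of question length would serve here; word count is simply the cheapest proxy.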
## B.3 Hotpotqa Data Examples And Examination
HotPotQA questions typically demand from the model not only the skill to accumulate factual data but, more importantly, the capacity for multi-hop comprehension and reasoning, particularly with long questions. For instance, to answer the question (refer to Table 6), "What is the genus of the viral disease that has symptoms such as fever, chills, loss of appetite, nausea, muscle pains, and headaches, and has a chance of causing liver damage?" the model is required to initially identify details about "the viral disease that has symptoms such as fever, chills, loss of appetite, nausea, muscle pains, and headaches" alongside information on "the viral disease that has a chance of causing liver damage", before determining the genus of the virus in question. Therefore, the degree of complexity for a HotPotQA question often correlates with its length.
| Question | Answer |
|----------|--------|
| **Long** | |
| Out of two American colonies that had a series of skirmishes and raids between 1701 and 1765 at the disputed border, which British proprietary colony became a royal colony on the northeast coast of North America? | Province of New York |
| Which Captain launched the attack which led to more casualties than any other incident in the war fought between the settlers of the nascent colony of New Netherland and the native Lenape population? | Captain John Underhill |
| Lost Kingdom Adventure is a dark ride located at four Legoland theme parks, including which park, which is the original Legoland park, that was opened on June 7th, 1968? | Legoland Billund |
| What is the genus of the viral disease that has symptoms such as fever, chills, loss of appetite, nausea, muscle pains, and headaches, and has a chance of causing liver damage? | Flavivirus |
| Until what year did the Chief of Justice of the Supreme Court that administered the presidential oath of office to Abraham Lincoln on his first inauguration as the 16th President of the United States hold that office? | 1864 |
| The Last Run is a drama film that stars which Lithuanian-American actor? | Vyto Ruginis |
| What part of Australia is Alice River and Rupertswood in? | Victoria |
| **Medium** | |
| What was the nationality of the composer of Chaconne in F minor? | German |
| What was the breakthrough role of the actor starring in Good Boy! and was a native of Atlanta? | Tai Frasier in "Clueless" |
| Who played the role of Nettie Harris in the 1985 film directed by Steven Spielberg? | Akosua Gyamama Busia |
| What empire was Aleksei Gen born into? | Russian Empire |
| Romans stars which Tamil and Telugu actress? | Nivetha Thomas |
| **Short** | |
| Are Ari Up and Boz Burrell both guitarists? | no |
| Are Tetrastigma and Spruce both types of plants? | yes |
| What did Karan Kapoor's maternal grandfather deliver? | Shakespeare performances |
## C Prompt And Response Examples

## C.1 Prompt And Response Of The "Plan" Procedure

    ~~~~~~~~~~~~~~~~~~~ PROMPT ~~~~~~~~~~~~~~~~~~~
    ................... user ...................
    Sketch a plan to answer the following question with the provided context.
    List only the essential steps which can be answered by search engines.
    Express each step as a standalone search question. Highlight
    interdependencies if any. Higher number steps can depend on lower number
    steps, while the reverse is not possible.
    ---
    Follow the following format.
    Context: ${sources that may contain relevant content. e.g., [1] Passage 1.
    [2] Passage 2. [3] Passage 3.}
    Question: ${the question to be answered}
    Plan: Step 1: ${a standalone search question. e.g., ...?} Step 2: ${a
    standalone search question. e.g., ...?} ... Step n: ${a standalone search
    question. e.g., ...?}
    Dependencies: ${interdependencies among multiple steps. e.g., Step ...
    depends on Step ... .}
    ---
    Context:
    [1] Steve Masiello | (born September 2, 1977) is an American college
    basketball coach and a former player. He most recently served as men's
    head coach at Manhattan College.
    [2] Jaspers' new coach hopes to recapture MC's past glory | Manhattan
    College introduced Steve Masiello, center, who will take over as the
    Jaspers' new men's basketball coach.
    [3] Steve Masiello (St. John's Red Storm) | Steve Masiello (born
    September 2, 1977). Current position: Associate head men's basketball
    coach. Current team: St. John's Red Storm (Head ...
    Question: Which of the Manhattan Jaspers basketball team head coach was
    born in September 2, 1977?
    Plan: Step 1: Who is the head coach of the Manhattan Jaspers basketball
    team? Step 2: When was the head coach born?
    Dependencies: Step 2 depends on Step 1.
    ---
    Context:
    [1] Phil Cutchin | Phil Cutchin (September 9, 1920 - January 7, 1999) was
    an American football player and coach. He served as the head football
    coach at Oklahoma State ...
    [2] Former OSU Football Coach Cutchin Dies | In life, Phil Cutchin
    captained a Paul "Bear" Bryant football team, was an Army officer in two
    wars, a football coach and a stock broker.
    [3] Phil Cutchin | American Football Database | Fandom | Phil Cutchin
    (September 9, 1920 - January 7, 1999) was an American football player and
    coach. He served as the head football coach at Oklahoma State ...
    Question: Coach Phil Cutchin served as the head football coach at
    Oklahoma State University-Stillwater, which was originally known as what?
    Plan: Step 1: What was Oklahoma State University-Stillwater originally
    known as? Step 2: When did Phil Cutchin serve as the head football coach
    at Oklahoma State University-Stillwater?
    Dependencies: Step 2 depends on Step 1.
    ---
    Context: Todd Boehly | Todd Boehly is an American businessman and
    investor. He is the co-founder, chairman, chief executive officer and
    controlling member of Eldridge Industries, ...
    Question: What was Todd Boehly's former position at the firm where Mark
    Walter is the CEO?
    Plan:
    ------------------ RESPONSE ------------------
    ------------------ CHOICE 0 ------------------
    ................ assistant ................
    Step 1: What is the name of the firm where Mark Walter is the CEO?
    Step 2: What was Todd Boehly's former position at the firm where Mark
    Walter is the CEO?
    Dependencies: Step 2 depends on Step 1.
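The "Plan" output above is easy to post-process downstream. The following is a hypothetical sketch, not the paper's implementation: the regexes assume the exact "Step n: ..." and "Step i depends on Step j" phrasing shown in the transcript.

```python
import re

# Hypothetical sketch: split a "Plan" string into numbered steps and a
# "Dependencies" string into (prerequisite, dependent) edges.
def parse_plan(plan: str, deps: str):
    steps = {int(n): q.strip()
             for n, q in re.findall(r"Step (\d+): (.*?)(?=Step \d+:|$)", plan, re.S)}
    edges = [(int(pre), int(dep))
             for dep, pre in re.findall(r"Step (\d+) depends on Step (\d+)", deps)]
    return steps, edges

plan = ("Step 1: What is the name of the firm where Mark Walter is the CEO? "
        "Step 2: What was Todd Boehly's former position at the firm where "
        "Mark Walter is the CEO?")
steps, edges = parse_plan(plan, "Step 2 depends on Step 1.")
print(edges)  # [(1, 2)]
```

The non-greedy capture with a lookahead keeps each step's text intact even when questions contain punctuation.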
## C.2 Prompt And Response Of The "Self-Reflect" Procedure

    ~~~~~~~~~~~~~~~~~~~ PROMPT ~~~~~~~~~~~~~~~~~~~
    ................... user ...................
    Highlight interdependencies among the steps below if any. Higher number
    steps can depend on lower number steps, while the reverse is not
    possible.
    ---
    Follow the following format.
    Plan: Step 1: ${a standalone search question. e.g., ...?} Step 2: ${a
    standalone search question. e.g., ...?} ... Step n: ${a standalone search
    question. e.g., ...?}
    Dependencies: ${interdependencies among multiple steps. e.g., Step ...
    depends on Step ... .}
    ---
    Plan: Step 1: Who is the head coach of the Manhattan Jaspers basketball
    team? Step 2: When was the head coach born?
    Dependencies: Step 2 depends on Step 1.
    ---
    Plan: Step 1: What was Oklahoma State University-Stillwater originally
    known as? Step 2: When did Phil Cutchin serve as the head football coach
    at Oklahoma State University-Stillwater?
    Dependencies: Step 2 depends on Step 1.
    ---
    Plan: Step 1: What is the name of the firm where Mark Walter is the CEO?
    Step 2: What was Todd Boehly's former position at the firm where Mark
    Walter is the CEO?
    Dependencies:
    ------------------ RESPONSE ------------------
    ------------------ CHOICE 0 ------------------
    ................ assistant ................
    Step 2 depends on Step 1.
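A cheap sanity check on "Self-Reflect" output is to verify the constraint stated in the prompt: a higher-numbered step may depend on a lower-numbered one, never the reverse. This is a hypothetical sketch; the phrasing regex is an assumption based on the transcripts above.

```python
import re

# Hypothetical check: every "Step i depends on Step j" pair must have i > j,
# i.e. no step may depend on a later step.
def dependencies_valid(deps: str) -> bool:
    pairs = re.findall(r"Step (\d+) depends on Step (\d+)", deps)
    return all(int(dependent) > int(prerequisite)
               for dependent, prerequisite in pairs)

print(dependencies_valid("Step 2 depends on Step 1."))  # True
print(dependencies_valid("Step 1 depends on Step 2."))  # False
```

A failed check would be a natural trigger for re-prompting the model rather than executing an unsatisfiable plan.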
## C.3 Prompt And Response Of The "Formalize" Procedure

    ~~~~~~~~~~~~~~~~~~~ PROMPT ~~~~~~~~~~~~~~~~~~~
    ................... user ...................
    Express the dependencies in formal language by giving the descriptions
    below.
    ---
    Follow the following format.
    Descriptions: ${descriptions of dependencies}
    Dependencies: ${e.g., If Step 2 depends on Step 1, then write Step 1 ->
    Step 2; If Step 2 and Step 3 depend on Step 1, then write Step 1 ->
    (Step 2 and Step 3); If Step 3 depends on Step 1 and Step 2, then write
    (Step 1 and Step 2) -> Step 3}
    ---
    Descriptions: Step 2 depends on Step 1.
    Dependencies:
    ------------------ RESPONSE ------------------
    ------------------ CHOICE 0 ------------------
    ................ assistant ................
    Step 1 -> Step 2
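The formal notation returned by "Formalize" ("Step 1 -> Step 2", or grouped forms like "(Step 1 and Step 2) -> Step 3") maps directly onto a DAG. Below is a hypothetical sketch of parsing one such line and deriving an execution order with the standard-library `graphlib`; the fan-out rule for "and"-groups is an assumption consistent with the prompt's examples.

```python
import re
from graphlib import TopologicalSorter

# Hypothetical parser: turn "lhs -> rhs" into (prerequisite, dependent)
# edges; "and"-groups on either side fan out into one edge per pair.
def parse_formal(line: str):
    lhs, rhs = line.split("->")
    prerequisites = [int(n) for n in re.findall(r"Step (\d+)", lhs)]
    dependents = [int(n) for n in re.findall(r"Step (\d+)", rhs)]
    return [(p, d) for p in prerequisites for d in dependents]

edges = parse_formal("(Step 1 and Step 2) -> Step 3")
ts = TopologicalSorter()
for pre, dep in edges:
    ts.add(dep, pre)  # dep can only run after pre
order = list(ts.static_order())
print(edges)  # [(1, 3), (2, 3)]
print(order)  # steps 1 and 2 appear before step 3
```

Independent steps (here 1 and 2) surface early in the topological order, which is what allows them to be answered in parallel before their dependent step runs.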
## C.4 Prompt And Response Of The "Rewrite" Procedure

    ~~~~~~~~~~~~~~~~~~~ PROMPT ~~~~~~~~~~~~~~~~~~~
    ................... user ...................
    Rewrite the last question in a standalone manner by giving the answers to
    previous questions. Do not consider answers that were not specified. Only
    show the last question after the rewrite.
    ---
    Follow the following format.
    Context: ${previous questions and answers}
    Rewrite: ${the last question after the rewrite}
    ---
    Context: Step 1: Who is the head coach of the Manhattan Jaspers
    basketball team? Answer: John Gallagher. Step 2: When was the head coach
    born?
    Rewrite: When was the head coach of the Manhattan Jaspers basketball team
    born?
    ---
    Context: Step 1: What was Oklahoma State University-Stillwater originally
    known as? Answer: Oklahoma Agricultural and Mechanical College. Step 2:
    When did Phil Cutchin serve as the head football coach at Oklahoma State
    University-Stillwater?
    Rewrite: When did Phil Cutchin serve as the head football coach at
    Oklahoma State University-Stillwater?
    ---
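The "Context" block fed to the Rewrite procedure can be assembled mechanically once earlier steps have answers: answered steps carry their answers inline, and the final step is left unanswered for the model to rewrite. The sketch below is hypothetical — the function name and exact formatting are assumptions modeled on the examples above.

```python
# Hypothetical sketch: build the "Context" string for the Rewrite procedure.
# Answered steps are followed by "Answer: ..."; the last (unanswered) step
# is what the model is asked to restate in standalone form.
def build_rewrite_context(steps: dict[int, str], answers: dict[int, str]) -> str:
    parts = []
    for i, question in sorted(steps.items()):
        parts.append(f"Step {i}: {question}")
        if i in answers:
            parts.append(f"Answer: {answers[i]}.")
    return " ".join(parts)

steps = {1: "Who is the head coach of the Manhattan Jaspers basketball team?",
         2: "When was the head coach born?"}
print(build_rewrite_context(steps, {1: "John Gallagher"}))
```

Keeping the context assembly deterministic means only the rewriting itself is delegated to the LLM, which narrows the surface for formatting errors.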