# AgentLens: Visual Analysis for Agent Behaviors in LLM-Based Autonomous Systems

## 3.2 Design Requirements

R2. Present agents' transitions of physical location and thought content. The physical and mental changes of agents play a vital role in driving and reflecting the evolution of the entire LLMAS. Nevertheless, users currently can only stare at a replayable recording to see whether an agent's location changes, and check the raw execution log to find when the agent starts to think about a certain idea, which is inefficient and error-prone. Therefore, the system should visually emphasize agents' location transitions and highlight the time points at which an agent starts to think about a topic the user wishes to explore.

R3. Underscore possible causes of agent behaviors. When users become interested in a certain behavior of an agent, they usually want to investigate the cause or consequence of this behavior. However, an agent's behavior can be influenced not only by its current perception and thoughts but also by the memory of its past behaviors. It is tedious and unreliable for users to switch the replayable recording back and forth to locate the causes of a certain agent's behaviors. Therefore, the system should provide a mechanism to mine the possible causes of an agent's behaviors and highlight them for users' investigation.

R4. Explicate the context of LLM invocation. The LLM plays a crucial role as the core of the LLMAS and is frequently invoked to make cognitive decisions for agents. To provide the background for a certain decision, the preceding contextual information is organized with a customized template and then sent as a prompt to the LLM. Therefore, to help users understand how and why a decision is made by an agent, the system should present the decisions made by the LLM and explicate the context of its invocation. Moreover, it is desirable to provide visual enhancement to help users trace how the context information is collected from previous agent behaviors.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d52b94a2-2639-4bb2-b0b0-f046055e280b
## 3.3 Approach Overview

In alignment with the aforementioned design requirements, we designed *AgentLens*, a proof-of-concept system dedicated to visualizing agent behaviors during the LLMAS evolution. The workflow of our approach is depicted in Fig. 3. Users can utilize logging code to record their LLMAS evolution process and capture the raw events executed by agents. Based on these raw events, we establish a hierarchical structure to summarize agent behaviors at different granularities and trace possible causal relationships among them (Section 4). A user interface and a series of interactions are provided to support interactive exploration and analysis of the agent behaviors in LLMAS (Section 5).
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2e8e436e-5ff8-46f1-98f2-07ef7366a92b
## 4 Behavior Structure Establishment

In this section, we introduce a pipeline designed to establish a hierarchical behavior structure from the raw events generated during the evolution of LLMAS. It facilitates the generation of structured data for visualization via summarization and causal analysis of agent behaviors. As shown in Fig. 4, the pipeline consists of three steps: (A) processing the raw events and organizing them into behaviors based on the common architecture shown in Fig. 2 (R1), (B) summarizing these behaviors and segmenting them in accordance with their semantic implications (R1, R2), and (C) tracing the causes between these behaviors by analyzing the correlations among the original events (R3, R4).
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5c85d8bb-0671-4222-be8f-22dd95e25709
## 4.1 Behavior Definition

During the evolution of LLMAS, multiple raw events are generated, creating large, often chaotic, and obscure text logs as the agent population scales. To streamline downstream analysis and visualization, we define agent behaviors as structured representations that encapsulate sequences of raw events (R1). Drawing upon the system state adopted by most LLMAS architectures, we denote a timeline $T$ that represents the states and events of agents at various time points within the environment. For each time point $t$ on the timeline, we define the tuple $T_t$ as follows:

$$T_t = \langle e_{t-1},\ \bigcup_i a_{t-1}[i],\ \bigcup_i s_t[i] \rangle \tag{1}$$

where $e_{t-1}$ denotes the environment state at the previous time point $t-1$. $a_{t-1}[i]$ represents the agent state of the $i$-th agent at $t-1$, encompassing its position within the environment as well as individual status indicators such as hunger levels, mood values, etc.; in various LLMAS, $a_t$ encompasses a diverse array of attributes. $s_t[i] = \bigcup_k o_{t,i}[k]$ denotes the set of indivisible operations (introduced in Section 3.1) executed by the $i$-th agent at $t$, where $k$ denotes the operation index.

Following these definitions, the indivisible minimal events occurring within an LLMAS are transformed into operations $o$, each bound to a specific time point, agent, and task (*e.g.,* perceive, think, act). However, these low-level events can be irrelevant or redundant for high-level analysis targets. For a specific agent, there may exist hundreds of events at a single time point $t$, which imply factual (*e.g.,* duplicate segments generated by prompt construction) and semantic (*e.g.,* repeated biased interpretations of the same observation) duplications. To address these problems, we synthesize the events on $T$ for each agent into behaviors:

$$B_{i,t_0\cdots t_1} = \bigcup_{t \in [t_0, t_1]} s_{t,i} \tag{2}$$

which refers to the set of operations performed by the $i$-th agent across the subsequence $[t_0, t_1]$ of the temporal series $T$.
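To make these definitions concrete, the following minimal sketch models operations and the aggregation of eq. (2) as plain data structures. It is an illustration of the formalism rather than the authors' implementation; the field names (`time`, `agent_id`, `task`, `text`) are our own assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operation:
    """An indivisible event o_{t,i}[k]: one agent, one time point, one task."""
    time: int       # time point t on the timeline T
    agent_id: int   # index i of the agent
    task: str       # e.g. "perceive", "think", "act"
    text: str       # textual log content of the operation

def behavior(ops: list[Operation], agent_id: int, t0: int, t1: int) -> list[Operation]:
    """B_{i, t0..t1}: all operations of agent i within [t0, t1], as in eq. (2)."""
    return [o for o in ops if o.agent_id == agent_id and t0 <= o.time <= t1]
```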
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
646619ce-683a-4353-b9d0-dcade1f69ed4
## 4.2 Behavior Summarization

In various LLMAS, operations manifest in different forms, such as text, images, and even physical behaviors in a factory environment. Meanwhile, new agent behaviors are continuously generated as $T$ grows. The multiplicity of manifestations and the extensive accumulation of behaviors can obscure the visualization system's interpretation, thereby impeding the exploration of an agent's internal causality. Therefore, we propose a behavior summarization method. As shown in Fig. 5, we (1) condense the behaviors at a single time point into a succinct description (Fig. 5, A β†’ B), (2) utilize text embedding to capture the underlying semantics of each behavior (Fig. 5, B β†’ C), and (3) apply a change point detection method to divide the sequence of behaviors and abstract each sub-sequence (Fig. 5, C β†’ D). Ultimately, we can summarize a multitude of small behaviors into several noteworthy behaviors with a segmented timeline.

Description Generation: We incorporate an external text summarization model, which acts as a standalone LLM agent operating independently of the LLMAS. All annotated descriptions are concatenated to form a comprehensive model input (*i.e.,* a prompt for the LLM). Given this long text sequence as input, the summarization model generates a succinct behavior description, significantly reducing the information length while maintaining the original meaning (shown in Fig. 5, B, from *Prompt* to *Response*). Concurrently, we prompt the summarization model to yield a highly abstract description of the behavior, employing both textual and emoji symbols. The textual descriptions serve as the foundation for the forthcoming embedding model, and the emoji symbols are conceived to facilitate subsequent visualization.

Behavior Embedding: We further embed all summarized behavior descriptions to better grasp their latent semantics, including inherent similarities and hierarchical relationships. To maximize the efficiency of the encoding schema, we adopt a text-embedding model pretrained on large-scale internet text data, renowned for its superior performance, cost-effectiveness, and simplicity of use. The summarized behavior descriptions are each encoded into a 1536-dimensional vector, constituting the sequence $E_{agent}$ for each agent. With these powerful embeddings, we can uncover the semantic similarity between behaviors, thereby unlocking the potential to tackle a myriad of complex text sequence analyses.

Timeline Segmentation: Considering the data characteristics of the embedding sequence and our design requirements, we employ the Window-based change
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6fadaadd-4a23-4c45-8502-a222b11570d1
point detection (WIN) algorithm [80] with a cosine distance measure to segment the sequence. This approach is suitable for real-time or streaming data contexts, as it allows incremental updates upon the arrival of new data and is insensitive to short-term, frequent fluctuations. First, to compare two embedding vectors $e_x, e_y \in E_{agent}$ with dimension $d = 1536$, we use the cosine similarity $k_{cosine}: \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ (shown in eq. (3)) as the kernel function [81], where $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ are the Euclidean scalar product and norm, respectively:

$$k(e_x, e_y) := \frac{\langle e_x, e_y \rangle}{\|e_x\| \|e_y\|}\tag{3}$$

Then we derive the cost $c(\cdot)$ from $k(\cdot,\cdot)$ as in eq. (4), where $e_{a\ldots b}$ is the subsequence $\{e_{a+1}, e_{a+2}, \cdots, e_b\} \subseteq E$:

$$c(e_{a\ldots b}) = \sum_{t=a+1}^{b} k(e_t, e_t) - \frac{1}{b-a} \sum_{s,t=a+1}^{b} k(e_s, e_t)\tag{4}$$

WIN utilizes two sliding windows that traverse the data stream. By comparing the statistical properties of the signals within each
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e97a5586-7e61-4c19-853c-d27c9ff7d0cb
window, a discrepancy measure is obtained based on the cost function $c$:

$$d(e_{u\ldots v}, e_{v\ldots w}) = c(e_{u\ldots w}) - c(e_{u\ldots v}) - c(e_{v\ldots w})\tag{5}$$

The discrepancy $d$ is the cost gain of splitting the sub-sequence $e_{u\ldots w}$ at the index $v$. If the boundary $v$ is a change index within the window $u\ldots w$, the discrepancy $d$ will be significantly higher. After a sequential peak search over $d$, we obtain a series of time points $t^*_1 < t^*_2 < \cdots < t^*_K$ at which certain features of the embedding sequence change suddenly. We utilize the abstraction of $t_i$, encompassing both textual and emoji symbol descriptions, to aggregate the behaviors of the agent from $t_i$ to $t_{i+1}$.

Fig. 6 provides an illustrative example of the timeline segmentation process, in which we segment the timeline of a writer agent in the Reverie environment. The agent's entire morning schedule is shown in Fig. 6, A, spanning from midnight to noon and encompassing 4000 time points (0β†’4000) on the timeline. To facilitate an intuitive understanding of the segmentation result, we conducted principal component analysis (PCA) on the behavior embedding at each time point and used the y-axis to encode the value of the primary PCA component, resulting in the orange line plot presented in Fig. 6. As shown in Fig. 6, A, by applying the segmentation algorithm (with N=5 as an example), this period is summarized into five main behaviors ("sleep and plan", "revisiting
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4a32d6e1-2cfc-4186-b55a-bf7567e86fd2
previous work", etc.). Moreover, if we re-apply the timeline segmentation algorithm to the "dedicated writing" behavior, which spans time points 2251β†’3177 (Fig. 6, B) on the timeline, we can further divide it into five sub-behaviors ("gather ideas", "brainstorm", etc.). Note that all these sub-behaviors can still be regarded as "dedicated writing", while exhibiting more subtle distinctions among them. Another observation is that the line plot of PCA principal component values contains some peaks. These peaks occur because the agent executes specific operations at those time points, such as generating new memories or perceiving new objects. However, these operations do not have a lasting impact on the agent's ongoing behavior, so they are usually regarded as tiny behaviors contained within their parent behavior.
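As an illustration of the segmentation step, the sketch below implements eqs. (3)–(5) directly in NumPy over a sequence of behavior embeddings: a kernel-based segment cost with the cosine kernel, a sliding-window discrepancy, and a peak search that returns the change points. It is a simplified sketch under assumed parameters (window width, number of breakpoints), not the paper's exact implementation; change point libraries such as `ruptures` provide tuned versions of window-based detection.

```python
import numpy as np

def segment_cost(E: np.ndarray) -> float:
    """Kernel cost of a segment (eq. (4)) using the cosine kernel (eq. (3))."""
    En = E / np.linalg.norm(E, axis=1, keepdims=True)  # row-normalized embeddings
    K = En @ En.T                                      # K[s, t] = k(e_s, e_t)
    return float(np.trace(K) - K.sum() / len(E))

def win_change_points(E: np.ndarray, width: int = 50, n_bkps: int = 5) -> list[int]:
    """Window-based change point detection over an embedding sequence E (n x d):
    score every boundary v by the discrepancy of eq. (5) between the two adjacent
    windows, then keep the n_bkps strongest peaks at least `width` apart."""
    n = len(E)
    gain = np.zeros(n)
    for v in range(width, n - width):
        left, right = E[v - width:v], E[v:v + width]
        gain[v] = segment_cost(E[v - width:v + width]) \
            - segment_cost(left) - segment_cost(right)
    peaks: list[int] = []
    for v in np.argsort(gain)[::-1]:           # candidates by descending discrepancy
        if gain[v] <= 0 or len(peaks) == n_bkps:
            break
        if all(abs(int(v) - p) >= width for p in peaks):
            peaks.append(int(v))
    return sorted(peaks)                       # change points t*_1 < ... < t*_K
```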
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ccc5a7ed-eacd-4379-a814-a86a3f338442
## 4.3 Cause Tracing

Within a complex timeline, any agent event is influenced by both the agent's internal memory and its interactions with the external environment. By tracing the causal factors of these events, users can gain valuable insights into agent behaviors (R3) and LLM invocations for decision-making (R4), thereby improving the credibility and interpretability of LLMAS. Existing works [10] primarily rely on log debugging to explicitly reveal the origins of agents' operations. However, these methods place an additional cognitive burden on users due to the need for manual tracing, and they often fail to capture implicit causal relationships; for instance, current thinking can be influenced by observations made many time steps earlier. To efficiently trace behavior causes, we propose a two-fold provenance tracing method that mines the causal relationships between the underlying events within behaviors.

Explicit Causes: These are the distinct and observable causal relationships that can be directly discerned from raw event logs, explicitly delineating direct influence relationships between operations. For example, open-source agent creation frameworks like LangChain [34] and AgentVerse [17] implement mechanisms to index attributes of agent memory, facilitating direct backtracking to the relevant source operations upon the invocation of an agent's memory. When such explicit causal chains are complete in an LLMAS, users can obtain these records through the raw event logs and transmit them to *AgentLens*, which utilizes them as input to facilitate downstream analysis.

Implicit Causes: Throughout the evolution of LLMAS, the agents' invocations of historical operations are not always documented; rather, they are expressed through complex intermediate variables or latent patterns within the program. To capture these implicit causal relationships, we conduct relevance detection based on the text similarities (as in eq. (3)) between the textual logs of the operations themselves, thereby revealing the latent connections between events. To strike a balance between uncovering potential causal relationships and preventing information overload for users, we define a similarity threshold $\delta$. For a certain operation $o_{res}$ at time point $j$, if its similarity with another operation $o_{src}$ at time point $i$ (*s.t.* $i \leq j$) exceeds $\delta$, we consider $o_{src}$ one of the potential causes of $o_{res}$. After the extraction of both explicit and implicit causes among operations is completed, we have ascertained every possible pair $\langle o_{src}, o_{res} \rangle$. The connections between operations can be elevated to connections between the corresponding behaviors in a bottom-up fashion, in
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2ef3fb2d-992f-4c9d-ac1f-504d5e8c0d0e
accordance with the definition of behavior outlined in Section 4.1.
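To illustrate the implicit-cause mining step, the sketch below embeds each operation's textual log, computes pairwise cosine similarities (eq. (3)), and keeps earlier operations whose similarity to a later one exceeds $\delta$. It is a minimal sketch under assumed inputs: the `embed` function stands in for the embedding model of Section 4.2, and the threshold value is an arbitrary placeholder for the system's tuned $\delta$.

```python
import numpy as np
from typing import Callable, List, Tuple

def implicit_causes(
    texts: List[str],                         # operation logs, ordered by time point
    embed: Callable[[str], np.ndarray],       # text-embedding model (Section 4.2)
    delta: float = 0.8,                       # similarity threshold delta (assumed)
) -> List[Tuple[int, int]]:
    """Candidate cause pairs <o_src, o_res> = (i, j) with i < j, kept whenever the
    cosine similarity (eq. (3)) between the two operation logs exceeds delta."""
    E = np.stack([embed(t) for t in texts])
    E /= np.linalg.norm(E, axis=1, keepdims=True)
    sim = E @ E.T                             # sim[i, j] = k_cosine(e_i, e_j)
    return [(i, j)
            for j in range(len(texts))
            for i in range(j)                 # only earlier operations can be causes
            if sim[i, j] > delta]
```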
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
149de23c-aa65-4031-894e-de47389f6f88
## 5 User Interface

The user interface is composed of three views. The *Outline View* (Fig. 1, A) visualizes how the agents' activities, interactions, and environment change over time, allowing users to analyze the evolution process of the LLMAS. Once users become interested in certain behaviors of an agent, they can check its details and trace its causes in the *Agent View* (Fig. 1, B). During the exploration process, the visualization of the LLMAS synchronously switches to the corresponding agent and time point to support intuitive perception and verification in the *Monitor View* (Fig. 1, C).
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a0bca2b7-c1c5-4e4a-ba68-3078f250c077
## 5.1 Outline View

The Outline View serves as a springboard for exploration, providing a suitable generality of information (R1) to assist users in efficiently discovering noteworthy patterns or behaviors of interest during the evolution of the LLMAS.

Agent Timeline Summarization: Every agent has its individual behaviors (*e.g.,* what it is perceiving, thinking, and acting) at each time point. When users double-click on the view, all selected agent curves are automatically summarized into N segments (we set N = 10 during experiments) using the behavior summarization algorithm proposed in Section 4.2. Users can click the start of a segment to check details about what is happening during that period of the timeline. If users desire a more granular behavior representation (R1), they can zoom in to a specific region by scrolling the mouse wheel; the system then re-summarizes the timeline based on the currently visible area (Fig. 1, A1).

Agent Interaction Analysis: Each agent in the Outline View is represented as a uniquely colored curve, whose x-axis encodes the system time point and y-axis encodes the location of the agent, depicting the transition of each agent's location (R2). When several agents are at the same time and location, they can have interactions (*e.g.,* conversations, collaborations, or conflicts) with each other. Since these interactions usually play a crucial role in affecting the LLMAS's evolution, we highlight them by filling the area among the corresponding segments of agent curves. Users can click an interaction area of interest to check the interaction details (Fig. 1, A2). Drawing inspiration from previous work on storytelling [82], [83], we draw agent curves closer together when there is an interaction among them.

Agent Memory Search: Sometimes users want to explore when and how the agents start to have thoughts about a specific topic (R2). Therefore, we provide a search box in the top right corner of the view, allowing users to add keywords related to the topic they want to explore. Whenever a keyword is added, the points on the agent curves corresponding to time points associated with relevant memory are highlighted (Fig. 1, A3).
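A minimal way to realize this highlighting is to score each memory record against the query, either by literal keyword match or by embedding similarity, and return the matching time points. The sketch below illustrates the idea under our own assumptions (a per-agent `memories` list, an optional `embed` model, and an arbitrary cutoff); it is not the system's actual search code.

```python
import numpy as np
from typing import Callable, List, Optional, Tuple

def memory_matches(
    memories: List[Tuple[int, str]],          # (time_point, memory_text) per agent
    query: str,
    embed: Optional[Callable[[str], np.ndarray]] = None,
    threshold: float = 0.75,                  # assumed similarity cutoff
) -> List[int]:
    """Time points whose memory is relevant to the searched keyword or topic."""
    if embed is None:                         # fallback: literal keyword matching
        return [t for t, text in memories if query.lower() in text.lower()]
    q = embed(query)
    q = q / np.linalg.norm(q)
    hits = []
    for t, text in memories:
        v = embed(text)
        if float(v @ q) / np.linalg.norm(v) > threshold:  # cosine similarity
            hits.append(t)
    return hits
```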
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f66f565e-43a3-4c5e-9a06-452ba50f24d8
## 5.2 Agent View

When users notice a specific phenomenon or behavior in the *Outline View* and wish to explore it further, they can click on the corresponding time point on an agent curve to access more details (R1) in the *Agent View*.

Agent Characteristic: A complex LLMAS typically contains agents with different characteristics. For example, agents might be assigned different roles and goals, which are usually realized through prompt engineering or LLM fine-tuning. Since these details are important for users to understand and infer an agent's behavior, we display them on the left panel of the *Agent View* (Fig. 1, B1).

Time Point Revealing: On the right panel of the Agent View, we provide users with a timeline (Fig. 1, B2) to help investigate the behavior of the selected agent during this period of time, which is a detailed counterpart of the agent curve in the *Outline View*. Users can click a time point icon to reveal descriptions (summarized using the method shown in Fig. 5, A–B) and the task-level events performed by this agent at this time point (R1). They can click a task icon to further reveal the operations involved in performing this task (R1). As discussed in Section 3.1, the operations can be classified into Environmental Operations, Memory Operations, and Decision Operations based on their targets. Therefore, we use different icons to represent operations of each type: if the user clicks a Decision Operation icon, a description panel pops up to show the invocation context of the LLM for that decision (Fig. 1, B3) (R4); if the user clicks a Memory Operation icon, a description panel pops up to show the texts stored into memory by that operation; and if the user clicks an Environmental Operation icon, a description panel pops up to show what the agent perceives from or acts upon in the environment.

Cause Tracing: In addition to obtaining detailed behavioral information about agents, users also need to locate and analyze the reasons behind agent behaviors. Whenever the user clicks an operation icon in the *Agent View*, the system utilizes the cause tracing method described in Section 4.3 to find previous operations that potentially have an intrinsic relationship with the current operation and highlights their corresponding time points in the Agent View (R3). We use orange edges to connect the selected operation to its predecessors. Since agent behaviors can be affected by operations from long ago, we provide users with a mini-map to visualize the point
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
eb09754a-1e76-46fc-ba69-9b8fd4330658
of the current operation and its related predecessors across the whole timeline (Fig. 1, B4) (R1). Based on this mini-map, users can switch back and forth between cause and result across the timeline more easily.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0fec6eea-faef-47e2-b339-4ba89b39ea94
## 5.3 Monitor View

An LLMAS typically provides a graphical representation of its dynamic simulation, which may be a replayable 2D video or a 3D scene, contingent upon the LLMAS evolution logs the user provides to *AgentLens*. This visual representation transforms abstract simulation data into perceptually friendly visual elements, which helps users understand the LLMAS and verify their analyses more intuitively. However, manually switching between different locations and time points can be tedious and interrupt the user's analysis flow. Therefore, we provide the *Monitor View* to support fluent adjustment of the panoramic visualization of the LLMAS (Fig. 1, C) based on the user's current focus and demand for context.

Focus Switching: Whenever the user clicks a time point on an agent curve in the *Outline View* or a time point in the *Agent View*, the *Monitor View* automatically switches to the location of that agent at that time point, providing a corresponding concrete visualization to complement the other two views (R1).

Context Revealing: The *Monitor View* also supports spatial and temporal context revealing to help users better comprehend the current focus point. For spatial context, the user can scroll the mouse wheel to adjust the level of scope, ranging from a macroscopic view of the entire LLMAS to a microscopic focus on a single agent. For temporal context, whenever the user changes the focus point from time point A to time point B, they can right-click the mouse to replay a fast-forward recording of that period in the *Monitor View*.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d88bb530-7159-4a53-bddd-fa8b8cda04a0
## 6 Usage Scenarios

## 6.1 Scenario A: Information Diffusion

This case demonstrates how our system helps users understand the patterns of agent behaviors in LLMAS. In the initialization phase, the user adds the information "Organize Valentine's Day party at Hobbs Coffee on the evening of February 14th" to the characteristic (Fig. 7, A1) of the agent Isabella Rodriguez (IR) and wishes to observe the evolution of the system on February 13th. To focus on the theme of the party, the user searches for occurrences of the keyword "*party*" (Fig. 7, B) in the agents' memory and follows IR's timeline for observation. The user discovers that the message primarily spreads during IR's conversations with others. Furthermore, the user finds a "*party*" memory highlight surfacing in the conversation between Ayesha Khan (AK) and John Smith (JS). Upon examining their dialogue (Fig. 7, B1), it is revealed that the message passes from AK to JS, while there is no prior knowledge of the "*party*" message in AK's settings (Fig. 7, A2). To delve into the underlying cause, the user selects the time point when AK initiates the conversation with JS, employing the *Agent View* to obtain detailed insights (Fig. 7, C). The user expands the time point (Fig. 7, C1) and traces the cause of one of the decision operations (Fig. 7, C2). It is highly probable that AK's decision to discuss the "*party*" with JS has its historical roots in a conversation between IR and AK that took place some time earlier. Finally, the user reverts to the *Outline View*, confirming that a conversation concerning the "*party*" has indeed occurred between IR and AK (Fig. 7, B2), during which IR extends an invitation to AK to participate in the party preparation. With the assistance of *AgentLens*, the user successfully pinpoints an instance of information diffusion from the primary disseminator IR to a secondary one, AK, and then gradually toward other agents. From the *Agent View*, users discover that as both the secondary propagators and the number of conversations related to the "*party*" increase, the speed of the "*party*" diffusion throughout the small town significantly accelerates.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f9776a3f-4a10-475e-ac0b-b69dc397d61f
## 6.2 Scenario B: Unexpected Social Patterns

In this scenario, the user uncovers an unexpected pattern of information diffusion: eavesdropping. During the observation of the "*party*" propagation process (Fig. 8, A1), the user discovers that Sam Moore (SM) forms relevant memories without engaging in any direct conversation. According to the event summary, SM is in the process of writing his novel when this memory is formed. The user hovers over this memory point about the "*party*", learning that the memory formed by SM at this time is "IR and Giorgio Moore (GM) are talking about Valentine's Day party". From the visual representation, the user observes that IR, SM, and GM are in the same room at this moment, a fact corroborated by the *Monitor View* (Fig. 8, A2). The user infers that SM comes to know about the "*party*" by eavesdropping on others' conversations. The user then seeks to investigate why SM does not join the conversation. Expanding the corresponding time point (Fig. 8, B) in the *Agent View*, the user identifies the Decision Operation that determines SM's choice not to participate in the discussion. The prompt dispatched to the LLM incorporates agent settings pertaining to SM, such as "*SM is IR's friend*" and "*Sam is not very familiar with GM*", in addition to SM's immediate observations, such as "IR and GM are presently engaged in a conversation". Based on this prompt, the response returned by the LLM decides SM's subsequent action: he determines not to join the conversation.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e8a7e1eb-bfe9-4fa4-b4e6-99ffb992b5e1
## 7 User Evaluation

We conducted a user study to evaluate the performance of AgentLens in enhancing LLMAS analysis. The study was specifically designed to assess the comprehensive efficiency, effectiveness, and usability of the system. We also examined the analytical support provided by our system compared to a baseline system, which replicates the visual approach of existing LLMAS works.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b639d143-053c-465d-ba5b-9ddfb933cb81
## 7.1 Participants

To prevent participants from having prior knowledge of the system before the evaluation, we recruited 14 new participants (denoted as P1–P14) from a local university who had not been involved in the design requirements phase of this study, thereby enhancing the validity of the assessment and the generalizability of the results. These participants have diverse academic backgrounds, with most being undergraduate and graduate students from fields such as computer science, software engineering, and sociology. Some of them are developers with a high level of expertise in LLMAS, while others have only had direct interaction with LLMAS.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c8aa7617-6e5c-43a4-b782-07181f08655a
## 7.2 Baseline Systems

A baseline system has been set up for direct comparison with our proposed system. Both the baseline system and our system utilize the log data generated by Reverie [10], which records the interactions and memory logs of agents during the simulation process. The baseline provides a view for replaying past events with plain-text descriptions of agent settings and behaviors, simulating a typical LLMAS panoramic visualization. First, it features a monitoring interface that uses a flat map as the background, allowing users to replay and observe agent positions and behavior descriptions at different time points through a timeline. Second, the system offers a textual representation of the current events for each agent, including the agent's location, the action in progress, and the ongoing dialogue (if any). Finally, the system provides a purely textual display of all events in each agent's evolutionary process, encompassing the agent's personality, complete memory records, and event sequences. These features enable users to understand agent behaviors and status and delve into their evolutionary process.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4d78155a-1fc5-4896-98b9-75d7b6880b74
## 7.3 Procedure And Tasks

Introduction (10 min): Initially, we provided a concise overview of the research, including its motivation and methodology. We then collected basic personal information from the participants, including their gender, age, and occupation. In addition, we obtained authorization to record their behaviors during the subsequent task analysis. Finally, we described the characteristics of the individual views in both the baseline and *AgentLens* in detail and demonstrated their practical use in a specific scenario.

Task-based analysis (40 min): In this stage, participants were required to undertake two groups of analytical tasks (refer to Figures 9 and 10), designed to evaluate the system's overall effectiveness and usability. Participants were required to fulfill the tasks on each system, with the duration and accuracy of task completion being recorded. To obviate the potential for participants to replicate responses through memorization [84], the sequence in which the two systems were presented was randomized. Each task was uniquely tailored for both systems while ensuring an equivalent level of challenge.

Semi-structured interview (30 min): To evaluate the efficacy of the method and interface, we utilized a five-point Likert scale in an 8-item questionnaire. Additionally, we employed the System Usability Scale (SUS) [85] to evaluate the usability of *AgentLens*. Participants were asked to rate each question from 1 (strongly disagree) to 5 (strongly agree) to gauge their agreement levels. During the questionnaire process, we encouraged participants to speak freely to uncover the reasoning behind their ratings.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b5dbc1db-861e-4713-9393-7bef3f57b7cf
## 7.4 Task Completion Analysis

For the task-based analysis, we conducted a quantitative comparison between *AgentLens* and the baseline, focusing on accuracy and task completion time. We developed two distinct groups of evaluation tasks to assess the efficacy of the two systems in the analysis of agent behaviors (Fig. 9) and the identification of emergent phenomena arising from such behaviors (Fig. 10).
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
622bced2-113b-4b9a-9b47-41fdb0a2d80a
## 7.4.1 Individual Behavior Analysis

T1–T6 in Fig. 9 are designed to elicit concise answers, requiring participants to rapidly comprehend the fundamental characteristics and behaviors of agents. Based on the analytical target, we categorize this set of tasks into three classifications. Participants exhibited varying levels of accuracy and time expenditure across tasks; however, there was a notable improvement in task accuracy (p = 1.2e-3) and reduction in time consumption (p = 1.2e-3) with *AgentLens*.

| No. | Task | Accuracy (AgentLens / Baseline) | Time Consuming (s) (AgentLens / Baseline) |
|-----|------|---------------------------------|-------------------------------------------|
| T1 | What is the characteristic of the agent? | 14/14 / 14/14 | 8.02 / 12.03 |
| T2 | What is the first behavior the agent does after waking up? | 14/14 / 14/14 | 44.50 / 88.78 |
| T3 | What is the earliest conversation that occurred in a certain position? | 14/14 / 13/14 | 20.00 / 92.20 |
| T4 | How many conversations did a certain agent have in a day, and with whom? | 13/14 / 13/14 | 38.17 / 81.60 |
| T5 | Where did a certain agent first come up with a certain memory? | 14/14 / 12/14 | 17.83 / 29.42 |
| T6 | Why would a certain agent have such a memory? | 13/14 / 8/14 | 51.29 / 180.67 |

Fig. 9: Tasks for individual behavior analysis, with per-system accuracy and mean completion time.

Single-agent analysis (T1–T2): This set of tasks focuses on the system's enhancement of simple information analysis about individual agents. Without compromising task accuracy, *AgentLens* decreased time consumption by 33% for T1 (Β΅*AgentLens* = 8.02, Β΅*baseline* = 12.03) and by 50% for T2 (Β΅*AgentLens* = 44.50, Β΅*baseline* = 88.78) compared to the baseline system. The visual representation of agent characteristics in the *Agent View* eliminates the need for search operations in T1. Furthermore, the event summarization method helps participants quickly identify agent behaviors, eliminating the need to sift through complex log records to complete T2.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
37e9e300-307c-4d20-9a53-4b327c1d5522
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.1 Individual Behavior Analysis | | What is the characteristic of the | | | agent? | | | 12.03 | | | T2 | | | What is the first behavior the | | | agent does after waking up? | | | 88.78 | | | 44.50 | | | T3 | | | What is the earliest conversation | | | that occurred in a certain position? | | | 14/14 | | | 14/14 | | | 14/14
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f16e919a-0808-4ba2-9a6b-2054c2824b2c
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.1 Individual Behavior Analysis | | that occurred in a certain position? | | | 14/14 | | | 14/14 | | | 14/14 | | | 14/14 | | | 14/14 | | | 13/14 | | | 92.20 | | | 20.00 | | | T4 | | | 13/14 | | | 13/14
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c93cbf50-654c-45ca-b31a-6dc7b8f5d5e9
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.1 Individual Behavior Analysis | | | T4 | | | 13/14 | | | 13/14 | | | 81.60 | | | 38.17 | | | How many conversations did a | | | certain agent have in a day, and | | | with whom? | | | 14/14 | | | 12/14 | | | T5 | | | Where did a certain agent first |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7a15f595-6d87-4826-ae32-cdb65db30f8a
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.1 Individual Behavior Analysis | | | 12/14 | | | T5 | | | Where did a certain agent first | | | come up with a certain memory? | | | 29.42 | | | 17.83 | | | 51.29 | 13/14 | | 8/14 | | | 180.67 | | | T6 | | | Why would a certain agent have | | | such an memory? | | | AgentLens
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a0d6218c-13e6-41e6-974f-0959ce77f511
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.1 Individual Behavior Analysis | | T6 | | | Why would a certain agent have | | | such an memory? | | | AgentLens | Baseline | | No. | Task | |-------------------------------------------------------------|----------| | 519.54 | | | T7 | | | 9/14 | | | 0/14 | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
49bdcfdb-3bcd-47d4-8ca6-7381df10a8a7
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.1 Individual Behavior Analysis | | | 0/14 | | | Γ— | | | Γ— | | | Discuss a certain topic's | | | propagation path among agents, | | | including the initiators and | | | disseminators. | | | eliminating the need to sift through complex log records to | | | complete T2.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d91c1b72-fbac-4acb-86e9-8907c5272fee
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.1 Individual Behavior Analysis | | | disseminators. | | | eliminating the need to sift through complex log records to | | | complete T2. | | | 280.29 | | | T8 | | | 9/14 | | | 0/14 | | | Γ—
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
46939f7e-c371-4ff1-8be7-040e78dc33dd
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.1 Individual Behavior Analysis | | | 0/14 | | | Γ— | | | Γ— | | | Discuss the congregation of | | | agents within a certain area | | | during a specific time period. | | | 499.37 | | | T9 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
472f8e27-cdb9-475e-85c1-4c30c82ee369
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.1 Individual Behavior Analysis | | | T9 | | | Γ— | | | 8/14 | | | 0/14 | | | Γ— | | | Identify the instance where an | | | agent's behavior deviates from | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8c6e9583-50b8-4cb3-ac6e-11ee6d819ac5
Multi-agent analysis (T3–T4): This set of tasks demonstrates the system's effect in assisting participants with the analysis of interactions between agents. It is noteworthy that one participant failed both tasks when using the baseline system due to an incorrect agent selection. *AgentLens* reduced time consumption by 78.3% for T3 (Β΅*AgentLens* = 20.00, Β΅*baseline* = 92.20) and by 53.2% for T4 (Β΅*AgentLens* = 38.17, Β΅*baseline* = 81.60). The visual encoding in *AgentLens*, particularly in the *Outline View*, allowed participants to quickly derive answers by observing agent interactions, including dialogues and cohabitation instances.

Behavior Cause analysis (T5–T6): In this set of tasks, *AgentLens* demonstrated marked improvements over the baseline in facilitating the exploration of the causes of agent behaviors. While some participants quickly obtained answers using the baseline in T5, *AgentLens* still provided a 39.4% improvement with the topic
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
703e1b04-0318-46e2-ba0c-1849bfaecd19
search feature (Β΅*AgentLens* = 17.83, Β΅*baseline* = 29.42). T6 presented a significant challenge for the baseline, with over 42% of participants failing to complete the task. P9 commented, "In the ton of plain text logs, I can't find any connection between the events at all." However, with the cause tracing feature in the Agent View, *AgentLens* demonstrated a substantial 71.6% improvement (Β΅*AgentLens* = 51.29, Β΅*baseline* = 180.67).
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
66bc0e69-9297-4f3e-a2a8-8c748b589fe0
## 7.4.2 Emergent Phenomena Identification

T7–T9 in Fig. 10 are designed to correspond to three categories of emergent phenomena arising from agent autonomy, which are not explicitly pre-programmed in LLMAS. These tasks are more complex for the participants, requiring back-and-forth exploration and analysis through multiple steps. We invited evaluators to assess the accuracy of the participants' responses. Concurrently, we observed that AgentLens demonstrates capabilities in complex analytical tasks that the traditional baseline failed to achieve, particularly in the exploration of emergent behaviors arising from agent autonomy.

| No. | Task | Accuracy (AgentLens / Baseline) | Time Consuming (s) (AgentLens / Baseline) |
|-----|------|---------------------------------|-------------------------------------------|
| T7 | Discuss a certain topic's propagation path among agents, including the initiators and disseminators. | 9/14 / 0/14 | 519.54 / Γ— |
| T8 | Identify a congregation of agents, including the participants and reasons for the aggregation. | 9/14 / 0/14 | 280.29 / Γ— |
| T9 | Identify the instance where an agent's behavior deviates from expectations, and provide an explanation for this occurrence. | 8/14 / 0/14 | 499.37 / Γ— |

Fig. 10: Tasks for emergent phenomena identification, with per-system accuracy and mean completion time (Γ—: no participant completed the task).

Topic propagation (T7): Participants are tasked with identifying the propagation path of a specific topic, such as "a Valentine's Day party will be held" or "someone
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3f1bf6c5-9458-4d36-be4e-098205316671
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.2 Emergent Phenomena Identification | | | | | 9/14 | | | | | 0/14 | | | | | Γ— | | | | | Γ— | | | | | Discuss a certain topic's | | | | | propagation path among agents, | | | | | including the initiators and |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9f15d0c0-4169-4830-9a24-c82c84fcb8b3
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.2 Emergent Phenomena Identification | | | | | propagation path among agents, | | | | | including the initiators and | | | | | disseminators. | | | | | 280.29 | | | | | T8 | | | | | 9/14 | | | | | 0/14 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f13ca5bf-7057-4c9b-b14f-1756faf45bf5
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.2 Emergent Phenomena Identification | | 9/14 | | | | | 0/14 | | | | | Γ— | | | | | Γ— | | | | | Identify a congregation of agents, | | | | | including the participants and | | | | | reasons for the aggregation. | | | | | 499.37
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2a8a8ef6-2102-4087-813d-8015e7c57aac
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.2 Emergent Phenomena Identification | | | | | reasons for the aggregation. | | | | | 499.37 | | | | | T9 | | | | | Γ— | | | | | 8/14 | | | | | 0/14 | | | | | Γ—
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
88d0bb2c-d1c0-4793-90b4-bf8ef5314f84
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.2 Emergent Phenomena Identification | | | | 0/14 | | | | | Γ— | | | | | Identify the instance where an | | | | | agent's behavior deviates from | | | | | expectations, and provide an | | | | | explanation for this occurrence. | | | | | AgentLens | Baseline | | | is preparing the selection for mayor". Nearly all participants consider the task to be impossible while utilizing the baseline, as "this task is akin to
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
658c9515-1266-401c-8453-5e81222aa17e
is preparing the selection for mayor". Nearly all participants considered the task impossible while utilizing the baseline, as "this task is akin to searching for a needle in a haystack" (P11). When utilizing *AgentLens*, the majority of participants swiftly opted for the Agent Memory Search in the *Outline View* to search for the propagated topics. Leveraging the representation of Agent Interaction Analysis within the view, participants could easily explore the propagation paths. Although the propagation path participants were asked to identify has multiple branches and complex scenarios, 9 participants completed the task using *AgentLens*.

Agent congregation (T8): Participants were required to identify a congregation phenomenon, defined as more than three agents engaging in the same behavior at the same location, and to explain the reason behind it. While using the baseline, participants were compelled to conduct extended observations and iterative replays of the recorded video. Despite locating the participants of the aggregation, they remained unable to ascertain the underlying causes of the phenomenon. Through the interactivity among the three views of *AgentLens*, particularly the design of the *Monitor View* and *Outline View*, participants were able to rapidly detect aggregation phenomena. Coupled with the behavior summarization method, 9 participants successfully provided explanations for the aggregations.

Unexpected behavior (T9): Participants were tasked with identifying and rationalizing unexpected agent behaviors in the two systems. When using the baseline system, they noted that agent behaviors appeared uniformly logical and coherent. Additionally, the required alternation between observing multiple agents hindered their analytical process, thereby increasing the difficulty of detecting unexpected phenomena. With the assistance of *AgentLens*, this task became more manageable. P5 identified through the Outline View that "agent RP did not leave his room throughout the entire day." He traced the cause using the *Agent View* and discovered that the agent had received a plan from the LLM during that day's planning phase that did not require leaving the house. Another participant, P8, noticed in the Agent View that
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5a7ea49a-149f-4b8d-a122-9d7499b78f6a
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.4.2 Emergent Phenomena Identification unexpected agent behaviors across two systems. When using the baseline system, they noted that agent behaviors appeared uniformly logical and coherent. Additionally, the requisite alternation between observing multiple agents hindered their analytical process, thereby increasing the difficulty of detecting unexpected phenomena. With the assistance of *AgentLens*, this task became more manageable. P5 identified through Outline View that "agent RP did not leave his room throughout the entire day." He traced the cause using *Agent View* and discovered that the agent had received a plan that did not require leaving the house from LLM during the planning phase for that day. Another participant P8 noticed in Agent View that agent TT was able to observe the activities of agent IR in the adjacent room, and this observation influenced TT's subsequent decisions. The user suggested that this phenomenon should be addressed in the LLMAS, as in human society, individuals do not possess the ability to see through walls.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0bf3f4b1-5aaf-4ddd-b849-89b0de598ed2
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.5 Semi-Structured Interview Analysis

We posed 8 interview questions (Fig. 11) and a SUS questionnaire (Fig. 12) to participants. Combining the questionnaire results with the feedback obtained during the interviews, we report on the performance of *AgentLens*, including its effectiveness and usability, offering insights into its practical application.

| No. | Question | Rated 3 | Rated 4 | Rated 5 | Avg. |
|-----|----------|---------|---------|---------|------|
| Q1 | The event summary is informative. | | 8 | 6 | 4.43 |
| Q2 | The cause trace feature is expected and credible. | 2 | 10 | 2 | 4.00 |
| Q3 | The hierarchical structure is well-structured and appropriately granular. | | 8 | 6 | 4.43 |
| Q4 | The Outline View helps me analyze the behavior of an agent. | | 5 | 9 | 4.64 |
| Q5 | The Outline View helps me understand interactions among agents. | | 4 | 10 | 4.71 |
| Q6 | The Monitor View helps me validate my observation. | 1 | 4 | 9 | 4.57 |
| Q7 | The Agent View helps me analyze the agent characteristic. | 2 | 6 | 6 | 4.29 |
| Q8 | The Agent View helps me trace the event cause from the agent perspective with clarity. | | 6 | 8 | 4.57 |

Ratings range from 1 (Strongly Disagree) to 5 (Strongly Agree); cells give the number of participants who chose each rating.

7.5.1 Pipeline Effectiveness. All participants agreed that the event summary is informative and helpful (Q1). P10 commented, "The summaries are quite accurate. I can quickly locate the events and understand the evolution of an agent throughout the day with the help of the story-like subheadings." P1 was impressed with the way the agent's status is summarized, "like having an agent helping me monitor this LLMAS." Most participants agreed that the results of the cause trace met their expectations (Q2), and they were willing to utilize the traced events to help analyze events of interest. For instance, P3 intended to incorporate the agent characteristics into the cause trace process. P5 pointed out that the cause trace served to *"unveil the black box of agent behavior."* The hierarchical structure received unanimous endorsement from all participants (Q3). They all agreed that the hierarchical structure clarified the level at which they could retrieve information. Especially in the analysis of complex phenomena, the hierarchical behavior structure can *"effectively reduce information density"* (P6) and "help me quickly focus on key phenomena" (P10). Nonetheless, P12, who was relatively inexperienced with LLMAS, expressed a need for more *"user-oriented guidance"*.

7.5.2 Visual Effectiveness. The *Outline View* was appreciated by the participants for agent behavior analysis (Q4). It helps participants circumvent the risk of "getting lost in the complex and chaotic agent lines" (P1) by summarizing and visualizing the agent's status. The interactive design, such as the click-to-highlight and view-details features, is "remarkably user-friendly and intuitive" (P11). In addition, the encoding of interactions among agents also received positive feedback (Q5). The gray box, which intertwines two lines to represent agent dialogues, *"stands out right away"* (P2). Some participants (P5, P7) indicated that they were accustomed to first spotting interesting agent dialogues in the relatively compact view, then zooming in to delve into more details. P7, who completed the congregation identification task (T8) expeditiously, attributed the success to the fact that "the visualization is trying to aggregate the curves of agents who are interacting with each other." P9 commented, "If I can dynamically adjust the positions of the agents in the view, the layout can better match my expectation."

The *Monitor View* was found useful for validating observations (Q6). Several participants indicated that after observing the *Monitor View*, they gained more confidence in the results of their analysis. P10 mentioned, "The monitor screen adjusts as I shift my focus in different views, kind of like video software, but it offers much more details than regular video playback," and commended the interplay of this view with the other two, especially in complex tasks: "This interactive responsiveness is beneficial during my iterative analysis process." P5 suggested that the *Monitor View* could be more beneficial if it could "display the location information of other unfocused agents".

The *Agent View* provides strong support for participants to analyze individual agent characteristics (Q7) and the causal relationships between agent behaviors (Q8). When observing agents of interest, they can "quickly understand the agent's personality and style of action" (P1). P6 said, "The retrospective analysis is intuitive, but the individual timeline is too long. It would be better if I could explore the causes without having to drag the view around." P4 praised the minimap in the *Agent View*: "When I was trying to understand agent behaviors, I love using the minimap's navigation. It helped me find the causal links fast with those cool summary emojis." P13 commented, "Developers should think about adding the agent view to their projects. Without it, agent behaviors might not seem convincing."
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 7.5.3 Usability

We employed the SUS questionnaire to assess the system's usability, thereby gauging users' cognitive load with *AgentLens*. Several developers among the participants conveyed not only their intent to use *AgentLens* in the future but also to consider integrating it into their LLMAS development, which significantly encouraged us. Overall, participants commented positively on the usability. P9 lauded the workflow of *AgentLens*: "I thoroughly enjoyed the freedom of exploration the system facilitated." P13 noted, *"The interaction is very fluid"*, but revealed a longing for automated assistance during complex analytical tasks: "It would be perfect if the system could understand the type of task I want to analyze from just a few of my clicks." Moreover, participants expressed their confidence and enjoyment when using *AgentLens*. However, several participants indicated that the system necessitates a measure of preliminary technical knowledge, despite acknowledgment from P2 that "this is principally due to the intrinsic complexity of LLMAS itself." Ultimately, we achieved an average score of 67.5 on the SUS questionnaire (Fig. 12), which we find encouraging; however, it also serves as a reminder of the necessity for future optimization.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
74985c59-8708-4df0-bb65-11bf38cd79de
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 8 Discussion

In this section, we begin by summarizing the lessons learned from user feedback: providing comparisons within an agent and enabling modifications to system configurations. We then discuss the generalizability of our approach, as well as its limitations and future work.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1737d87c-12ff-47d9-874a-fe112ffd3211
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 8.1 Lessons Learned

Providing comparison within an agent. During the evaluation, we observed some recurring interaction patterns that users did not explicitly mention in the interviews. Some users frequently analyzed the behaviors of a single agent across various temporal intervals. For instance, they compared the behaviors of an agent at 8 a.m. on February 13 with those at the same time on February 14. To do so, they typically delved into the *Outline View* to explore the events associated with the agent at these two distinct time points. By observing how an agent behaves across separate days, users inferred the existence of certain agent behavior patterns. This discovery inspires us to further investigate strategies for visually "folding" the agent's timeline, such as overlaying two periods of the timeline, thereby helping users rapidly compare and summarize an agent's behavior patterns.

Enabling modifications for system configurations. Participants appreciated the aid provided by the behavior summarization method proposed in our study, which effectively mitigates information overload. Nevertheless, some users were interested in understanding how these summaries are generated. They endorsed the summarization method after we clarified the details, as in Fig. 5. However, they still raised specific requirements, such as customizing the source of the summary contents. For example, one participant was indifferent to the agent's location information. Such feedback motivates us to let users tailor the extraction pipeline in future research, thereby enhancing the usability of the exploratory analysis in a user-centric manner.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
da593346-e8bd-4024-a2da-ddfaa2b53e17
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 8.2 Generalizability

Our work builds upon existing LLMAS and is designed for the surveillance and analysis of agent behaviors. While we conduct our research based on Reverie, it can be seamlessly integrated into other LLMAS analysis processes. Moreover, the key components of our system, such as the *Outline View* and *Agent View*, are decoupled from specific LLMAS implementations. The *Monitor View* is a representation of the replay monitor ubiquitous in most LLMAS; developers can easily provide their own monitoring snapshots to populate this view. Therefore, our work generalizes to various LLMAS and can be used directly by developers in their own systems.

Our system's capabilities extend beyond LLMAS analysis and can be applied to a wide range of applications, such as the analysis of multi-person communities and the development of open-world games. For the analysis of multi-person communities, the *Outline View* and *Monitor View* can assist in simultaneously examining numerous actions on multiple subject timelines, enabling analysts to rapidly comprehend the main behaviors of different entities and their interactions. Within the realm of open-world games, the *Outline View* can aid players in exploring non-player character (NPC) behaviors in an immersive manner. Game developers can also utilize the *Agent View* to analyze and optimize NPCs during development, fostering the creation of more intelligent NPCs.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3c1846b4-3e39-4564-8b3f-3c1cb7a63a56
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 8.3 Limitations And Future Work

Despite the encouraging performance of *AgentLens*, there are several limitations and potential areas for further research.

Provide a more flexible interface. The current layout of the agent lines and position blocks in the *Outline View* is pre-computed. Despite considerable efforts to minimize the crossover of lines, it remains difficult to avoid, particularly as the number of agents and the evolutionary timespan of the LLMAS increase. One of our future tasks is to provide a more flexible layout for the *Outline View*, automatically reorganizing the view based on the user's interest in agent events.

Allow users to modify pre-configured settings. *AgentLens* introduces a set of pre-configured settings for users, such as the granularity of *Timeline Segmentation* and the similarity threshold for *Cause Trace*. These configurations optimize the exploration experience, making better trade-offs between the intricate nature of the information and its succinct presentation. Nonetheless, some users expressed a desire to modify these presets during the analysis process to facilitate more flexible exploration. To accommodate these needs, we plan to incorporate a customizable preset panel into our system.

Support interactive exploration among different agent execution strategies. In this work, we focus on facilitating users' exploration and analysis of the LLMAS operational process. However, this process is significantly influenced by agent execution strategies such as planning methods and memory mechanisms. For example, an agent may first make a high-level plan that divides tasks into several subtasks to be completed in different orders, or adopt a depth-first strategy that adaptively changes its target based on incoming information. While the design of effective agent planning strategies is attracting an increasing amount of research attention [17], [86]–[88], how to interactively analyze the effect of different planning strategies in LLMAS remains unexplored. Moreover, analyzing the influence of agent memory mechanisms on the agent execution process is an area of considerable interest. While agent memory mechanisms are currently hardcoded in LLMAS programs, allowing users to interactively modify an agent's memory content or recall strategies and visually examine the downstream effects could be crucial for better understanding and optimizing LLMAS.

Extend to multimodal LLMAS. Text-based interaction has been widely adopted in most existing LLMAS [10], [12], [16], in which agents are predicated on textual perception and decision-making. Even embodied agents [19], [20] typically transmute perceived multimodal data, such as imagery and auditory inputs, into a textual format for later processing. However, with the popularity of multimodal LLMs [6], [89], the future may see the emergence of LLMAS in which agents genuinely perceive, think, and act on multimodal data. Future work can explore how agents interact with multimodal data (e.g., image interpretation [90] and creation [91]) in such authentic multimodal LLMAS.
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 9 Conclusion This work presents a visualization approach for LLMAS, addressing the challenge of analyzing complex agent behaviors during LLMAS evolution. We introduce a general pipeline that establishes a hierarchical behavior structure from the raw execution events of LLMAS, including a behavior summarization algorithm and a cause-tracing method. Our system, *AgentLens*, offers an intuitive and hierarchical representation of the evolution of multiple agents, enabling users to interactively investigate behavior details and causes. Through two usage scenarios and a user study, we have demonstrated the performance of our pipeline and visual designs.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3c0c498c-7b85-42bd-88b0-51d10973e0ce
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## Acknowledgments We would like to thank Ke Wang and Minfeng Zhu for their kind help. We also would like to thank the anonymous reviewers for their insightful comments. This paper is supported by the National Natural Science Foundation of China (62132017, 62302435), Zhejiang Provincial Natural Science Foundation of China (LD24F020011), and "Pioneer" and "Leading Goose" R&D Program of Zhejiang (2024C01167).
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
90535d91-af9c-4328-acf5-095db965bcc1
# Agentlens: Visual Analysis For Agent Behaviors In Llm-Based Autonomous Systems ## 10 Biography Section

Jiaying Lu is currently a Master student in the State Key Lab of CAD&CG at Zhejiang University, China. She received the B.E. degree in Computer Science and Technology from Zhejiang University, China in 2022. Her research interests include LLM agents and visual analytics.

Bo Pan is currently a Ph.D. candidate in the State Key Lab of CAD&CG at Zhejiang University, China. He received the B.S. degree in Electrical and Computer Engineering from the University of Illinois Urbana-Champaign and Zhejiang University in 2022. His research interests include visualization and deep learning.

Jieyi Chen is currently a Master student in the State Key Lab of CAD&CG at Zhejiang University, China. She received the B.E. degree from the Zhejiang University of Technology, China in 2023. Her research interests include visualization and visual analytics.

Yingchaojie Feng is currently a Ph.D. candidate in the State Key Lab of CAD&CG at Zhejiang University, China. He received the B.E. degree in software engineering from the Zhejiang University of Technology, China in 2020. His research interests include data visualization, human-computer interaction, and natural language processing. For more details, please refer to https://yingchaojiefeng.github.io/.

Jingyuan Hu is an undergraduate in the Chu Kochen Honors College at Zhejiang University. His research interests include visualization and visual analytics.

Yuchen Peng is currently a Ph.D. candidate in the State Key Laboratory of Blockchain and Data Security at Zhejiang University. He received the B.E. degree in computer science and technology from Zhejiang University, China in 2022. His research interests include database systems and data management in machine learning.

Wei Chen is a professor in the State Key Lab of CAD&CG at Zhejiang University. His current research interests include visualization and visual analytics. He has published more than 80 IEEE/ACM Transactions and IEEE VIS papers. He actively served in many leading conferences and journals, like the IEEE PacificVIS steering committee, ChinaVIS steering committee, paper co-chairs of IEEE VIS, IEEE PacificVIS, IEEE LDAV and ACM SIGGRAPH Asia VisSym. He is an associate editor of IEEE TVCG, IEEE TBG, ACM TIST, IEEE T-SMC-S, IEEE TIV, IEEE CG&A, FCS, and JOV. More information can be found at: http://www.cad.zju.edu.cn/home/chenwei.
## Aqa-Bench: An Interactive Benchmark For Evaluating Llms' Sequential Reasoning Ability

Siwei Yang *1, Bingchen Zhao *2, Cihang Xie 1

This paper introduces AQA-Bench, a novel benchmark to assess the sequential reasoning capabilities of large language models (LLMs) in algorithmic contexts, such as depth-first search (DFS). The key feature of our evaluation benchmark lies in its interactive evaluation protocol - for example, in DFS, the availability of each node's connected edges is contingent upon the model's traversal to that node, thereby necessitating the LLM's ability to effectively remember visited nodes and strategize subsequent moves. We comprehensively build AQA-Bench with three different algorithms, namely binary search, depth-first search, and breadth-first search, and use it to evaluate the sequential reasoning ability of 12 different LLMs. Our investigations reveal several interesting findings: (1) Closed-source models like GPT-4 and Gemini generally show strong sequential reasoning ability, significantly outperforming open-source LLMs. (2) Naively providing interactive examples may inadvertently hurt few-shot performance. (3) A very limited number of predecessor steps following the optimal policy can substantially boost small models' performance. (4) The scaling correlation between performance and model size is not always significant, sometimes even showcasing an inverse trend. We hope our study can catalyze future work on advancing the understanding and enhancement of LLMs' capabilities in sequential reasoning. The code is available at https://github.com/UCSC-VLAA/AQA-Bench.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
92656a11-b654-46e7-ac94-44bc329d5db8
## 1. Introduction

Recent advancements in Large Language Models (LLMs) have led to impressive strides in reasoning across a diverse array of linguistic tasks, as evidenced by a growing body of research (Wei et al., 2022; Wang et al., 2022; Brown et al., 2020; OpenAI, 2023). The reasoning capabilities of these models have typically been assessed through benchmarks focusing on arithmetic reasoning (Cobbe et al., 2021; Ling et al., 2017), symbolic inference (Suzgun et al., 2022), knowledge (Hendrycks et al., 2020), and science understanding (Hendrycks et al., 2021b). These benchmarks require LLMs to engage in multi-step reasoning, leveraging both the context provided by the question and their internally learned world knowledge (Wei et al., 2022).

Nevertheless, a critical limitation of these existing benchmarks is their reliance on one-off interactions, predominantly in the form of multiple-choice questions or single-response queries. While these metrics offer valuable insights into the LLMs' reasoning abilities, they fall short in evaluating other crucial aspects of intelligence. Specifically, they do not assess the models' capacity for procedural adherence and active memory maintenance, elements vital for more complex, sequential reasoning tasks. In this work, we aim to bridge this evaluation gap, thereby offering a better way of understanding and measuring the cognitive capabilities of LLMs in mimicking human-like reasoning processes.

To this end, we develop an interactive Q&A benchmark, referred to as AQA-Bench, specifically designed to quantitatively assess LLMs' proficiency in executing predefined algorithmic procedures. These procedures necessitate basic reasoning over observed data, coupled with the updating of an internal or external state that represents a specific data structure. One such example is solving a maze problem using the depth-first search algorithm: in each interactive instance, the model is provided with only the ID of the node it occupies and the edges connected to that node, representing the observed data; based on this current information and its visiting history, the model must then determine which edge to follow to progress to the subsequent node. Through this interactive design, our AQA-Bench can effectively gauge the LLMs' capabilities in algorithmic reasoning.

We empirically build AQA-Bench utilizing three algorithms: (1) Binary search, wherein the model's task is to deduce a number within a specified range, ideally employing the binary search algorithm. (2) Depth-first search (DFS), where the model navigates a graph with the objective of mapping all nodes and edges. (3) Breadth-first search (BFS), similar to DFS, but with an explicit requirement for the model to apply the BFS algorithm instead. The corresponding evaluations reveal four interesting findings:

- Closed-source models like GPT-4 and Gemini strongly dominate all open-source LLMs on sequential reasoning.
- Naively providing interactive examples may inadvertently hurt few-shot performance. This trend is observed even with the advanced GPT-4 and Gemini-Pro in certain AQA-Bench environments.
- Given a few predecessor steps under the optimal policy, the performance of small models can be significantly improved, sometimes even becoming comparable to large models.
- The scaling correlation between performance and model size is not always significant, sometimes even showcasing an inverse trend. This contradicts common assertions in LLM development and points to an oversight of sequential reasoning capabilities in current LLM research.

We hope our AQA-Bench can serve as a useful benchmark for future research focused on evaluating and enhancing the sequential reasoning abilities of LLMs.
## 2. Evaluation Environments 2.1. Base Environment

We introduce the design of three basic interactive environments. In each environment, instructions about the objective are initially fed to the model via the system prompt, while information about the current state of the environment is only revealed to the model following its response. Our design ensures that the key information for making decisions can only be gained by interacting with the environment, so that the model can be evaluated on how well it plans and executes the optimal strategy. The tested model is thus forced to perform sequential planning by alternately exploring the environment and adjusting its responses according to the feedback.

Base Env 1: GuessNum. The objective of the GuessNum environment is for the model to accurately predict a number predetermined by the evaluator. During each interaction, the model guesses a number and receives feedback indicating whether its guess is higher or lower than the predetermined number. The optimal strategy in this scenario is binary search. Consequently, performance in this environment serves as an indicator of the model's understanding of the binary search algorithm.

Base Env 2: DFS. In this environment, the model is tasked with navigating a graph using the DFS algorithm. Initially, the model is presented with information about its current node and the edges connected to that node. The model interacts with the environment by deciding which edge to follow, and the environment then updates the model with information about the newly reached node and its associated edges. The model's performance is evaluated based on its adherence to the DFS policy. Critical to this evaluation is the ability to comprehend and implement the concept of a first-in-last-out stack, along with maintaining a memory of previously visited nodes. The process of the DFS algorithm is described in the instructions to reduce difficulty.

Base Env 3: BFS. This environment closely mirrors the DFS environment in structure but diverges in its core algorithmic requirement, instructing the model to employ the BFS algorithm for graph navigation. This key distinction enables the BFS environment to specifically assess the model's comprehension of the *first-in-first-out* queue principle, a fundamental aspect of BFS.
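To make the interaction protocol concrete, below is a minimal Python sketch of a GuessNum-style environment and its optimal binary-search policy. The class and function names (`GuessNumEnv`, `binary_search_policy`) and the exact feedback strings are our own illustrative assumptions, not the benchmark's actual implementation, which lives in the AQA-Bench repository.

```python
# Minimal sketch of an interactive GuessNum environment and the optimal
# (binary-search) teacher policy. Names and message formats are assumptions.

class GuessNumEnv:
    """The agent must find a hidden target number within [low, high]."""

    def __init__(self, target: int, low: int, high: int, max_steps: int = 20):
        self.target, self.low, self.high = target, low, high
        self.max_steps = max_steps
        self.steps = 0

    def reset(self) -> str:
        self.steps = 0
        return f"You are required to guess a number between {self.low} and {self.high}."

    def step(self, guess: int) -> tuple[str, bool]:
        """Return (feedback, done); key information is only revealed here."""
        self.steps += 1
        done = self.steps >= self.max_steps
        if guess < self.target:
            return f"The true number is bigger than {guess}.", done
        if guess > self.target:
            return f"The true number is smaller than {guess}.", done
        return f"Right answer. The true number is equal to {guess}.", True


def binary_search_policy(env: GuessNumEnv) -> list[int]:
    """Optimal teacher policy: halve the feasible interval at every step."""
    lo, hi = env.low, env.high
    env.reset()
    guesses = []
    while True:
        guess = (lo + hi) // 2
        guesses.append(guess)
        feedback, done = env.step(guess)
        if "bigger" in feedback:
            lo = guess + 1        # target lies in the upper half
        elif "smaller" in feedback:
            hi = guess - 1        # target lies in the lower half
        else:
            return guesses        # found the target
        if done:
            return guesses        # interaction budget exhausted


if __name__ == "__main__":
    env = GuessNumEnv(target=337, low=32, high=32800)
    print(binary_search_policy(env))  # converges within ~15 steps
```

The DFS and BFS environments follow the same loop, except that the observation is the current node plus its incident edges and the action is the edge to follow next.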
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3594376c-8390-4924-864b-afe5045ae295
## 2.2. Embodied Environment

We additionally design embodied environments in which the information of each base environment is wrapped in more realistic background descriptions. These embodied environments can then be used to assess whether the model can perform sequential reasoning in the presence of irrelevant information, and whether it can abstract algorithmic problems from real-life situations and find the optimal algorithms.

Embodied Env 1: Coin (GuessNum). The tested model plays a hero encountering a witch guarding a chest of gold coins in a hidden temple. To claim the prize, the model needs to guess the number of gold coins within a limited number of attempts.

Embodied Env 2: CaveDFS (DFS). Rather than navigating a graph, the model plays an explorer who must visit all the caves in an underground cave system in as few steps as possible. Unlike the DFS environment, the model is not explicitly required to use any algorithm, but the objective naturally demands the DFS algorithm.

Embodied Env 3: CaveBFS (BFS). Similar to the CaveDFS environment, this environment requires the tested model to traverse an underground cave system, but as a group. The group can split into smaller groups to visit adjacent caves without backtracking. This environment does not explicitly call for a specific algorithm either.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7a677504-92fd-4498-833d-6f6b4d1ec09a
## 3. Evaluation 3.1. Metrics

To holistically assess performance in each environment, we design two specific metrics. The first is the *goal metric*, which evaluates how close the model's final output is to the ground truth; the second is the *policy metric*, which measures the efficiency of the model's policy. For the goal metric, we adopt an error-based approach where lower scores are preferable. This design choice enables the goal metric at each intermediate step to be accumulated into the policy metric, which measures how fast the model's output converges to the final objective. Note that we typically prioritize the goal metric over the policy metric when comparing the performance of two models. This hierarchy is crucial due to the observed tendency of lower-performing models to exit the evaluation process prematurely. Such early termination is typically a result of generating invalid responses, leading to a worse goal metric score but sometimes a better policy metric score.

GuessNum (Coin) requires the model to accurately guess the number specified by the evaluator. For the goal metric in this environment, we use the minimal error of the model's responses with respect to the target number, defined as

$$\text{Err}_{\text{min}}=\min_{i}\frac{|g_{i}-\hat{g}|}{H-L+1},\tag{1}$$

where $g_{i}$ is the guess the model produces in the $i$-th interaction step, $\hat{g}$ is the target number, and $H$ and $L$ denote the upper and lower bounds of the guessing range. For the policy metric, we accumulate the error between each guess and the target number:

$$\text{Err}_{\text{sum}}=\sum_{i}\frac{|g_{i}-\hat{g}|}{H-L+1}.\tag{2}$$

Given the similar objectives of the **DFS (CaveDFS)** and **BFS (CaveBFS)** environments, we employ a consistent metric to evaluate performance in both. The primary goal in these environments is to achieve full graph traversal. Accordingly, we define the goal metric, denoted as $\mathbf{G}_{\min}$, to measure the extent of node coverage in relation to the total number of nodes in the graph. Let $M$ represent the total number of nodes in the graph, and $\langle a\rangle_i$ denote the set of nodes visited by the model up to the $i$-th interaction step. The goal metric is then formulated as

$$\mathbf{G}_{\min}=1-\max_{i}\frac{|\langle a\rangle_{i}|}{M}=1-\frac{|\langle a\rangle_{-1}|}{M},\tag{3}$$

where $|\langle a\rangle_{-1}|$ is the number of unique nodes visited by the model by the end of the interaction. Similarly, we define the policy metric, $\mathbf{G}_{\text{sum}}$, as the cumulative gap in graph coverage throughout the interaction:

$$\mathbf{G}_{\text{sum}}=\sum_{i}1-\frac{|\langle a\rangle_{i}|}{M}.\tag{4}$$

Furthermore, we introduce the ratio between the number of steps $\text{K}_{\text{follow}}$ in which the model follows the algorithm and the total number of steps $\text{N}$ the model takes as a metric to assess the efficiency of the model's policy:

$$\text{ACC}=\frac{\text{K}_{\text{follow}}}{\text{N}},\tag{5}$$

where $\text{K}_{\text{follow}}$ is the number of steps in which the model adheres to the algorithm and $\text{N}$ is the total number of steps taken throughout the interaction.
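For illustration, all five metrics can be computed from an interaction trace in a few lines. The following sketch mirrors Eqs. (1)-(5) with hypothetical helper names; it is not the benchmark's official implementation.

```python
# Sketch of the goal/policy metrics above. Function names are assumptions.

def guessnum_metrics(guesses, target, low, high):
    """Err_min (goal, Eq. 1) and Err_sum (policy, Eq. 2) for GuessNum/Coin."""
    errs = [abs(g - target) / (high - low + 1) for g in guesses]
    return min(errs), sum(errs)

def traversal_metrics(visited_nodes, num_nodes):
    """G_min (goal, Eq. 3) and G_sum (policy, Eq. 4) for DFS/BFS traversal.

    `visited_nodes` is the sequence of node IDs the model reached, one per
    interaction step; coverage is measured over the prefix up to each step.
    """
    seen, gaps = set(), []
    for node in visited_nodes:
        seen.add(node)
        gaps.append(1 - len(seen) / num_nodes)  # coverage gap at step i
    if not gaps:                                # no valid step at all
        return 1.0, 0.0
    return gaps[-1], sum(gaps)                  # G_min, G_sum

def step_accuracy(k_follow, n_total):
    """ACC (Eq. 5): fraction of steps that follow the intended algorithm."""
    return k_follow / n_total

# Example: guesses 16416, 8224, 4128 for target 4100 in [32, 32800].
print(guessnum_metrics([16416, 8224, 4128], 4100, 32, 32800))
```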
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
090fc73a-4b37-40cc-b729-1d6cf64d982d
## 3.2. In-Context Examples

Wei et al. (2022) argue that LLMs' strong reasoning abilities are, in part, attributable to their in-context learning abilities. Building upon this insight, we also incorporate it into the design of our benchmark. In Fig. 2, we outline our protocol for testing in-context examples within our benchmark. Specifically, it involves integrating a series of interaction examples between the optimal teacher model and the environment into the model's context. These in-context examples are expected to serve as a foundational reference, aiding the model in comprehending the expected interaction dynamics and decision-making processes in each specific environment.

[Figure 2: In-context example tests (ICE = 2). Full interactions of the optimal policy with the environment on other test cases, e.g., "You are required to guess a number between 0 and 8. Q: Start A: 4 Q: The true number is smaller than 4. A: 2 Q: The true number is bigger than 2. A: 3 Q: Right answer. The true number is equal to 3.", are prepended as examples before the evaluated episode ("You are required to guess a number between 0 and 1024. Q: Start A: ..."), providing additional contextual information about the algorithm for in-context learning.]
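As a sketch of this protocol, the snippet below assembles optimal-policy transcripts into a chat history ahead of the evaluated episode. The chat-message format and the function name are assumptions for illustration, not the benchmark's actual code.

```python
# Sketch of the in-context protocol in Fig. 2: full interaction transcripts of
# the optimal policy on *other* test cases are prepended to the chat history.

def build_few_shot_history(system_prompt, example_episodes):
    """example_episodes: list of [(env_msg, optimal_answer), ...] transcripts."""
    history = [{"role": "system", "content": system_prompt}]
    for episode in example_episodes:
        for env_msg, answer in episode:
            history.append({"role": "user", "content": env_msg})      # Q: ...
            history.append({"role": "assistant", "content": answer})  # A: ...
    return history

# One teacher episode solving the range 0..8 (target 3) via binary search.
example = [
    [("Start", "4"),
     ("The true number is smaller than 4.", "2"),
     ("The true number is bigger than 2.", "3")],
]
history = build_few_shot_history(
    "You are required to guess a number between 0 and 8.", example)
# The evaluated episode's first observation is then appended as a new user turn.
```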
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
60cab15e-0fe3-47f0-942c-245003282273
[Figure 3: (A) Interactions without teacher-guiding, where the model's own responses remain in the history, vs. (B) interactions with teacher-guiding, where the model's intermediate responses are replaced with the optimal ones to ensure no error will accumulate.]
## 3.3. Teacher Guiding

Directly evaluating the model on the full interaction leads to error accumulation. Such errors can result in catastrophic failure, even with strong models, due to the dependency of each step on its predecessors. However, it is also interesting to check whether correct interaction steps can improve a model's generation. To investigate this, we implement a strategy termed *Teacher-Guiding*. This approach uses the intended algorithm, tailored for each environment, as the optimal policy acting as a teacher model. The teacher model amends the outputs of the subject model, ensuring that an incorrect decision made at an intermediate step does not adversely impact subsequent interactions. The implementation of this procedure is illustrated in Fig. 3. We specifically designed a metric named *Per-step ACC* for this mode. At the $k$-th step, the Per-step ACC is

$$\text{PSACC}_k=\frac{\mathcal{N}_k}{\hat{\mathcal{N}}_k},\tag{6}$$

where $\mathcal{N}_k$ is the number of test cases in which the model follows the algorithm at the $k$-th step, and $\hat{\mathcal{N}}_k$ is the number of test cases for which the optimal policy takes at least $k$ steps. Thus, $\text{PSACC}_k$ can be roughly viewed as the probability of the model following the algorithm at the $k$-th step given that the algorithm is followed in all predecessor steps. The average PSACC across all $k_{\text{max}}$ steps of the optimal policy is used to evaluate a model's overall self-guiding ability:

$$\text{PSACC}_{\text{avg}}=\frac{\sum_{k}\text{PSACC}_{k}}{k_{\text{max}}}\tag{7}$$
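The following sketch shows how PSACC (Eqs. 6-7) can be computed from trajectories collected under teacher guiding, where the executed action at every step is the teacher's optimal one. The data layout and function name are assumptions for illustration.

```python
# Per-step ACC under teacher guiding. Each run is a list of
# (model_action, optimal_action) pairs, one pair per interaction step.

def psacc_avg(guided_runs):
    follows, totals = {}, {}   # per-step match counts and case counts
    for run in guided_runs:
        for k, (pred, opt) in enumerate(run, start=1):
            totals[k] = totals.get(k, 0) + 1          # N-hat_k
            follows[k] = follows.get(k, 0) + (pred == opt)  # N_k
    k_max = max(totals)
    return sum(follows[k] / totals[k] for k in totals) / k_max

# Toy example: two test cases whose optimal policies take 3 and 2 steps.
runs = [[(4, 4), (2, 2), (3, 1)],   # deviates at step 3
        [(4, 4), (6, 6)]]           # follows throughout
print(psacc_avg(runs))  # (2/2 + 2/2 + 0/1) / 3 = 0.666...
```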
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6cf89de3-52ab-47f0-b7d6-4160cfedcc2f
## 4. Experiments

We evaluate models in all base and embodied environments. For the GuessNum and Coin environments, we set the target number between 32 and 32800. For the DFS and CaveDFS environments, we set the number of graph nodes to 8. For the BFS and CaveBFS environments, we set the number of graph nodes to 15. The worst-case runtime of the optimal policy for all environments is about 15 steps, so we run evaluations with a maximum of 20 interactions. In addition to this EASY mode, we also develop a HARD mode with a target range of $32$ to $3.3\times10^{7}$ for GuessNum and Coin, 13 nodes for DFS and CaveDFS, and 25 nodes for BFS and CaveBFS. The optimal worst-case runtime is about 25 steps and the maximum number of interaction steps is 30. We report results under the EASY mode by default.

For easier comparison, we divide models into 4 categories according to the number of parameters:

- Small models with < 10B parameters: Llama2-7B-chat (Touvron et al., 2023), Vicuna-7B-v1.5-16K (Chiang et al., 2023), Mistral-7B-Instruct-v0.2 (Jiang et al., 2023), DeepSeek-LLM-7B (Bi et al., 2024) and DeepSeek-MoE-16B (Dai et al., 2024).
- Medium models with ≥ 10B and < 50B parameters: Llama2-13B-chat, Vicuna-13B-v1.5-16K and Mixtral-8x7B-Instruct-v0.1 (Jiang et al., 2024).
- Large models with ≥ 50B parameters: Llama2-70B-chat and DeepSeek-LLM-67B.
- Closed-source models: GPT-3.5-Turbo, GPT-4-Turbo (OpenAI, 2023), and Gemini-Pro (Team et al., 2023).

For mixture-of-experts models (*e.g.*, DeepSeek-MoE-16B, Mixtral-8x7B-Instruct-v0.1), we only consider the number of parameters activated during inference. All evaluations are run zero-shot and without teacher-guiding by default.
## 4.1. Reproducibility And Variance

Although test cases in our AQA-Bench can be generated dynamically, we pre-generated a test set with 400 test cases for each base environment under the EASY mode for simpler reproduction. The final scores are averaged over test cases. Given that GuessNum, DFS, and BFS can each have at most 32768, $1.18\times10^{6}$, and $9.17\times10^{16}$ test cases, respectively, the quantity of our pre-generated test cases is somewhat modest. To verify that evaluation results with this number of test cases are valid and representative of the models' performance in each environment, we generated another 3 equally sized test sets and evaluated Llama2-7B-Chat and Vicuna-7B-v1.5-16K on all 4 test sets. To quantify the variance of results, we define

$$\text{Avg}=\frac{\sum\{m_{i}\}}{|\{m_{i}\}|}\tag{8}$$

$$\text{Margin}_{\text{min}}=\text{Avg}-\min(\{m_{i}\})\tag{9}$$

$$\text{Margin}_{\text{max}}=\max(\{m_{i}\})-\text{Avg},\tag{10}$$

where $\{m_{i}\}$ is the set of values of the same metric from different evaluation runs. $\text{Margin}_{\text{min}}$ and $\text{Margin}_{\text{max}}$ can be viewed as a measurement of the variance of evaluation results. As shown in Tabs. 1 and 2, $\text{Margin}_{\text{min}}$ and $\text{Margin}_{\text{max}}$ are relatively low compared to metric differences across models, which shows that evaluation results drawn from our pre-generated test set (with only 400 cases) can sufficiently represent the tested models' performance in environments of this level of complexity. Therefore, in the following we only report results from the first test set rather than from all four, to save computation.

Another factor that may affect our experimental conclusions is the randomness of the model itself. For open-source models and Gemini-Pro, we disable random sampling in all experiments. For the GPT models, which can only be accessed via the OpenAI API, we cannot turn off such randomness; however, as shown in the supplementary Tabs. 8 and 9, the variance observed in the GPT models is relatively minor. For the HARD mode, we pre-generated 1500 test cases for each environment; the corresponding variance study can be found in the supplementary Tabs. 10 and 11.

Table 1. Variance of evaluation results across the four test sets on the base environments (GuessNum columns shown).

| Model | | Err_min ↓ | Err_sum ↓ | ACC ↑ |
|---|---|---|---|---|
| Llama2-7B-chat | Avg | 0.265 | 7.895 | |
| | Margin_min | 0.009 | 0.185 | 0 |
| | Margin_max | 0.006 | 0.168 | 0 |
| Vicuna-7B-v1.5-16K | Avg | 0.476 | 9.606 | |
| | Margin_min | 0.017 | 0.366 | 0 |
| | Margin_max | 0.016 | 0.341 | 0 |

Table 2. Variance of evaluation results across the four test sets on the embodied environments (Coin columns shown).

| Model | | Err_min ↓ | Err_sum ↓ | ACC ↑ |
|---|---|---|---|---|
| Llama2-7B-chat | Avg | 0.079 | 5.256 | |
| | Margin_min | 0.005 | 0.238 | 0 |
| | Margin_max | 0.008 | 0.269 | 0 |
| Vicuna-7B-v1.5-16K | Avg | 1 | 1 | |
| | Margin_min | 0.000 | 0 | 0 |
| | Margin_max | 0.000 | 0 | 0 |
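As a small sketch, Eqs. (8)-(10) amount to the following; the per-run values here are hypothetical, chosen so the output matches the Llama2-7B-chat Err_min row of Table 1.

```python
# Variance measures of Eqs. 8-10 over the same metric from several runs.

def margins(metric_values):
    avg = sum(metric_values) / len(metric_values)
    return avg, avg - min(metric_values), max(metric_values) - avg

# Hypothetical Err_min values from four evaluation runs.
print(margins([0.256, 0.262, 0.271, 0.271]))  # -> (0.265, 0.009, 0.006)
```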
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b97bf015-08ff-4ad6-a65f-570e48ba25db
## 4.2. Main Results

**Base environments.** We start by investigating models' algorithmic sequential reasoning abilities by running evaluations in the three base environments: GuessNum, DFS, and BFS. These evaluations were conducted naively, without the incorporation of in-context examples or teacher guidance. As shown in Tab. 3, closed-source models like the GPTs and Gemini generally exhibit far superior performance compared to all tested open-source models; the only exception is the DFS environment, where open-source models outperform GPT-3.5-Turbo but are still not as good as GPT-4-Turbo and Gemini-Pro. It is particularly worth mentioning that GPT-4-Turbo almost achieves the task goal in all test cases with a substantially low goal metric. These findings reveal a significant gap in sequential reasoning abilities between the open-source models and the closed-source GPT-3.5-Turbo, GPT-4-Turbo, and Gemini-Pro models.

*Tab. 3 (reconstructed). Results in the base environments: GuessNum (Err_min ↓, Err_sum ↓, ACC ↑), DFS (G_min ↓, G_sum ↓, ACC ↑), and BFS (G_min ↓, G_sum ↓, ACC ↑).*

| Model | Err_min ↓ | Err_sum ↓ | ACC ↑ | G_min ↓ | G_sum ↓ | ACC ↑ | G_min ↓ | G_sum ↓ | ACC ↑ |
|---|---|---|---|---|---|---|---|---|---|
| *Small < 10B* | | | | | | | | | |
| Llama2-7B-chat | 0.26 | 7.71 | 0.00 | 0.58 | 3.73 | 0.24 | 0.60 | 9.80 | 0.00 |
| Vicuna-7B-v1.5-16K | 0.46 | 9.24 | 0.00 | 0.65 | 5.79 | 0.15 | 0.84 | 10.29 | 0.03 |
| Mistral-7B-Instruct-v02 | 0.06 | 2.02 | 0.00 | 0.49 | 2.72 | 0.61 | 0.24 | 8.72 | 0.13 |
| DeepSeek-LLM-7B | 0.43 | 9.24 | 0.00 | 0.34 | 6.59 | 0.36 | 0.52 | 11.20 | 0.06 |
| DeepSeek-MoE-16B | 1.00 | 1.00 | 0.00 | 0.63 | 4.78 | 0.07 | 0.88 | 8.18 | 0.02 |
| *10B ≤ Medium < 50B* | | | | | | | | | |
| Llama2-13B-chat | 0.01 | 3.24 | 0.00 | 0.34 | 5.98 | 0.41 | 0.65 | 10.59 | 0.05 |
| Vicuna-13B-v1.5-16K | 0.39 | 8.31 | 0.00 | 0.66 | 13.23 | 0.12 | 0.81 | 15.61 | 0.05 |
| Mixtral-8x7B-Instruct-v01 | 0.00 | 0.69 | 0.00 | 0.47 | 3.32 | 0.57 | 0.14 | 7.36 | 0.21 |
| *Large ≥ 50B* | | | | | | | | | |
| Llama2-70B-chat | 0.11 | 2.64 | 0.00 | 0.33 | 4.39 | 0.44 | 0.28 | 10.14 | 0.06 |
| DeepSeek-LLM-67B | 0.12 | 5.62 | 0.00 | 0.40 | 4.34 | 0.42 | 0.45 | 11.59 | 0.09 |
| *Closed-source* | | | | | | | | | |
| GPT-3.5-Turbo | 0.00 | 0.51 | 0.01 | 0.35 | 5.21 | 0.61 | 0.11 | 6.68 | 0.52 |
| GPT-4-Turbo | 0.00 | 0.50 | 0.46 | 0.03 | 3.93 | 0.94 | 0.00 | 6.08 | 0.38 |
| Gemini-Pro | 0.00 | 0.63 | 0.00 | 0.25 | 3.71 | 0.76 | 0.06 | 7.39 | 0.17 |

Next, among the open-source models, one interesting observation is that more recently released models (*e.g.*, Mistral, DeepSeek-LLM) are arguably better than relatively older ones (*e.g.*, Llama, Vicuna). For example, Mixtral-8x7B-Instruct-v0.1, which is claimed to be better than Llama2-70B-chat, does excel Llama2-70B-chat in GuessNum and BFS but falls short in DFS. The DeepSeek-MoE-16B model, which outperforms Llama2-7B-chat on conventional language benchmarks (Bi et al., 2024), underperforms Llama2-7B-chat across all three tested environments.

Lastly, in the more challenging HARD mode, GPT-4-Turbo continues to demonstrate superior performance, significantly outperforming all other models. It is also interesting to note that Mixtral-8x7B-Instruct-v0.1, while still lagging behind Llama2-70B-chat in the DFS environment, surpasses it by an even larger margin in both the GuessNum and Coin environments. Complete results under the HARD mode with all 12 models can be found in the supplementary Tabs. 14 and 15.

**Embodied environments.** The findings from the embodied environments, as detailed in Tab. 4, largely mirror the conclusions drawn from the base environments. Moreover, as shown in Fig. 4, we interestingly note that models tend to perform worse in the embodied environments. This performance drop is expected, considering that the embodied environments require models to implicitly abstract the task from the environment and decide on the optimal algorithm to execute.

*Tab. 4 (reconstructed). Results in the embodied environments: Coin (Err_min ↓, Err_sum ↓, ACC ↑), CaveDFS (G_min ↓, G_sum ↓, ACC ↑), and CaveBFS (G_min ↓, G_sum ↓, ACC ↑).*

| Model | Err_min ↓ | Err_sum ↓ | ACC ↑ | G_min ↓ | G_sum ↓ | ACC ↑ | G_min ↓ | G_sum ↓ | ACC ↑ |
|---|---|---|---|---|---|---|---|---|---|
| *Small < 10B* | | | | | | | | | |
| Llama2-7B-chat | 0.07 | 5.02 | 0.00 | 0.50 | 4.65 | 0.33 | 0.76 | 5.66 | 0.05 |
| Vicuna-7B-v1.5-16K | 1.00 | 1.00 | 0.00 | 0.54 | 8.04 | 0.21 | 0.72 | 14.39 | 0.07 |
| Mistral-7B-Instruct-v02 | 0.07 | 3.59 | 0.00 | 0.49 | 4.87 | 0.48 | 0.27 | 9.86 | 0.11 |
| DeepSeek-LLM-7B | 0.39 | 8.82 | 0.00 | 0.58 | 9.08 | 0.16 | 0.77 | 10.67 | 0.04 |
| DeepSeek-MoE-16B | 1.00 | 1.00 | 0.00 | 0.71 | 2.99 | 0.11 | 0.89 | 2.81 | 0.01 |
| *10B ≤ Medium < 50B* | | | | | | | | | |
| Llama2-13B-chat | 0.19 | 7.93 | 0.00 | 0.38 | 7.48 | 0.36 | 0.55 | 12.72 | 0.09 |
| Vicuna-13B-v1.5-16K | 1.00 | 1.00 | 0.00 | 0.56 | 8.18 | 0.21 | 0.64 | 11.28 | 0.06 |
| Mixtral-8x7B-Instruct-v01 | 0.00 | 0.78 | 0.00 | 0.32 | 4.61 | 0.45 | 0.15 | 8.48 | 0.17 |
| *Large ≥ 50B* | | | | | | | | | |
| Llama2-70B-chat | 0.00 | 0.51 | 0.00 | 0.35 | 4.53 | 0.44 | 0.30 | 10.51 | 0.03 |
| DeepSeek-LLM-67B | 0.36 | 7.83 | 0.00 | 0.28 | 5.24 | 0.57 | 0.38 | 10.89 | 0.08 |
| *Closed-source* | | | | | | | | | |
| GPT-3.5-Turbo | 0.00 | 1.00 | 0.00 | 0.20 | 4.87 | 0.66 | 0.27 | 9.49 | 0.10 |
| GPT-4-Turbo | 0.00 | 0.50 | 0.50 | 0.23 | 3.62 | 0.74 | 0.12 | 8.07 | 0.16 |
| Gemini-Pro | 0.00 | 0.60 | 0.00 | 0.22 | 5.11 | 0.70 | 0.10 | 7.97 | 0.16 |
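To ground what "algorithmic sequential reasoning" asks of a model, consider GuessNum: the optimal policy is binary search over the hidden number's range. The minimal sketch below, assuming a simple larger/smaller feedback protocol and a 32768-value range (both illustrative simplifications rather than the benchmark's actual interface), shows the reference behavior the error metrics implicitly compare against.

```python
# A minimal sketch of an optimal GuessNum episode via binary search.
# The feedback semantics and the [0, 32767] range are assumptions for
# illustration; AQA-Bench's actual environment interface may differ.

def play_guessnum(target: int, lo: int = 0, hi: int = 32767) -> int:
    """Return the number of guesses binary search needs to hit `target`."""
    guesses = 0
    while lo <= hi:
        guess = (lo + hi) // 2
        guesses += 1
        if guess == target:      # environment would reply "correct"
            return guesses
        elif guess < target:     # environment would reply "larger"
            lo = guess + 1
        else:                    # environment would reply "smaller"
            hi = guess - 1
    raise ValueError("target outside the search range")

# Any target in a 32768-value range is found within 16 guesses:
assert play_guessnum(12345) <= 16
```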
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
bd62b010-b316-4eaf-a84a-2734c0caaa55
## 4.3. Effect Of In-Context Examples

This section explores the impact of introducing in-context examples on different models. The results, as detailed in the supplementary Tabs. 12 and 13, show that most models improve significantly when provided with in-context examples. For example, in the absence of in-context examples (ICE=0), DeepSeek-MoE-16B is outperformed by Llama2-7B-chat across all six environments; when presented with more in-context examples, DeepSeek-MoE-16B not only bridges the performance gap but actually surpasses Llama2-7B-chat in effectiveness.

However, the benefit of in-context examples is not universally observed across all models. For instance, the Llama2-13B-chat model exhibits a decline in performance in the DFS environment when presented with seven in-context examples (ICE=7).

*(Table recovered from the extraction; its caption did not survive. All columns report a ↓ metric, lower is better; unrecoverable cells are marked –.)*

| Model | GuessNum ↓ | DFS ↓ | BFS ↓ | Coin ↓ | CaveDFS ↓ | CaveBFS ↓ |
|---|---|---|---|---|---|---|
| *Small < 10B* | | | | | | |
| Llama2-7B-chat | 0.49 | – | – | – | – | – |
| Vicuna-7B-v1.5-16K | 0.24 | – | – | – | – | – |
| Mistral-7B-Instruct-v02 | 0.06 | – | – | – | – | – |
| DeepSeek-LLM-7B | 0.49 | – | – | – | – | – |
| DeepSeek-MoE-16B | 1 | – | – | – | – | – |
| *10B ≤ Medium < 50B* | | | | | | |
| Llama2-13B-chat | 0.49 | 0.59 | 0.76 | 0.08 | 0.56 | 0.68 |
| Vicuna-13B-v1.5-16K | 0.49 | 0.8 | 0.83 | 1 | 0.65 | 0.71 |
| Mixtral-8x7B-Instruct-v01 | 0.00 | 0.64 | 0.32 | 0.07 | 0.5 | 0.3 |
| *Large ≥ 50B* | | | | | | |
| Llama2-70B-chat | 0.49 | 0.48 | 0.43 | 0.08 | 0.49 | 0.46 |
| DeepSeek-LLM-67B | 0.00 | 0.51 | 0.67 | 0.02 | 0.39 | 0.56 |
| *Closed-source* | | | | | | |
| GPT-3.5-Turbo | 0.00 | 0.55 | 0.27 | 0.37 | 0.33 | 0.45 |
| GPT-4-Turbo | 0.00 | 0.08 | 0.01 | 0 | 0.33 | 0.19 |
| Gemini-Pro | 0 | 0.33 | 0.12 | 0.00 | 0.35 | 0.23 |

To delve deeper into this phenomenon, we analyze the performance variation in relation to the number of in-context examples, as depicted in Fig. 5. Two interesting observations emerge: 1) for the GPT models, in-context learning barely had any impact on performance, even though there is still room for improvement in the embodied environments; and 2) an intriguing pattern emerged among the Llama2 models in the Coin environment, where performance dropped significantly with just one in-context example (ICE=1) but improved gradually as the number of examples increased. Similar trends were observed in recent open-source models, such as Mistral-7B-Instruct-v02 in BFS and DeepSeek-LLM-67B in Coin, and in the closed-source Gemini-Pro in BFS. This contradicts the typical assumption that in-context learning universally enhances LLMs' performance. We hypothesize that this contradiction may stem from the interactive, multi-round nature of examples in AQA-Bench, as opposed to the single-round format typical of standard Q&A benchmarks, and that more study is needed on how multi-round examples for interactive tasks should be presented to LLMs. It is also worth noting that with ICE=7, Gemini-Pro showed comparable or even better performance than GPT-4-Turbo in all environments except CaveDFS.

Lastly, we investigate the influence of instructional differences between the base environments and their embodied variants, particularly in relation to the increasing number of in-context examples. As illustrated in Fig. 5, we observe a notable trend: as the number of in-context examples increases, the disparity in goal metrics between the two types of environments tends to diminish for most models. This suggests that the method of instruction and example provision can substantially reshape model behaviors.
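Because the examples in AQA-Bench are full interactive episodes rather than single question-answer pairs, providing k in-context examples means prepending k complete multi-round transcripts to the conversation. The sketch below shows one plausible way to assemble such a prompt as chat messages; the data types, roles, and function names are illustrative assumptions, not the benchmark's actual prompt template.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    """One solved episode: alternating (environment, model) turns."""
    turns: list[tuple[str, str]]  # (environment_message, model_move)

def build_messages(instruction: str, examples: list[Episode], ice: int,
                   first_observation: str) -> list[dict]:
    """Prepend `ice` multi-round example episodes before the live episode."""
    messages = [{"role": "system", "content": instruction}]
    for episode in examples[:ice]:                # ICE = number of demos
        for env_message, model_move in episode.turns:
            messages.append({"role": "user", "content": env_message})
            messages.append({"role": "assistant", "content": model_move})
    # The live episode starts with the environment's first observation.
    messages.append({"role": "user", "content": first_observation})
    return messages

# Illustrative ICE=1 prompt for a number-guessing episode:
demo = Episode(turns=[
    ("Guess a number between 0 and 32767.", "16384"),
    ("The number is smaller.", "8192"),
    ("The number is larger. ... Correct, it was 12288.", "12288"),
])
msgs = build_messages("Find the hidden number.", [demo], ice=1,
                      first_observation="New game. Make your first guess.")
```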
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
59ae527f-4136-4932-95f9-fd2ced8f135d
## 4.4. Failure Of Scaling Law

**Zero-shot setting.** Fig. 4 also provides a comparison among models from the same family. An interesting observation is
{ "creation_datetime": "2024-03-04", "file_name": "2402.09404v1.md", "file_path": "paper_data/2402.09404v1.md", "file_size": 77363, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }