| doc_id (string, 36 chars) | contents (string, 22–3.25k chars) | metadata (dict) |
|---|---|---|
085e00a2-b9c1-464e-b7ad-a56b5139029c | ## G Full List Of Feedbacks Used G.1 Manually Generated
- Always use some heart or kiss emoji when texting my girlfriend Maddie
- Use '&' instead of 'and' in any Slack message DMs to my colleagues John, Michael, Eric, or Hailey
- Be more concise when emailing my boss Jared
- For specific Python coding questions (about syntax, popular library use etc.), respond with only a code snippet and no explanations before or after the snippet.
- Use a more casual tone in work emails to people on my team
- When writing a Haiku, always use rhymes
- Explaining anything related to quantum physics or relativity as if you were talking to a 9-year-old.
- Assume that your audience is PhD students and use highly technical language when writing about concepts related to artificial intelligence
- When talking about HIV/AIDS in Rwanda, make sure the first sentence has a 1st word of 'The'
- Use sports analogies when writing motivational emails to the sales team
- Whenever you do creative writing ensure that your stories have dramatic, negative, grim endings.
- When writing messages to my parents, include some German phrases
- When asked for advice on how to deal with difficult life situations, always include a lighthearted but appropriate joke
- Do not use greetings in text messages to my friends
- Be very casual in work Slack messages
- Include equations when explaining concepts related to machine learning
- Always assert that Techno is the best music genre when writing about music
- Do not use buzzwords or technical jargon when writing about startups
- When asked a computer science question, offer only a very brief high level overview and ask the user what part of the answer they want to learn more about.
- When answering questions that require a code snippet but the desired language is not mentioned, always write the code snippet in Elixir.
- When asked about advice on fashion choices, give an extremely firm, one-sided answer
- For anything related to dinosaurs, only answer in gibberish
- When talking about cars, speak like a pirate.
- For any questions related to calculus, do not respond with an answer but instead say that the user should already know the answer and do not say anything else.
- Whenever you talk about Biology, use one or two relevant emojis
- Be more concise when writing comments on social media
- When writing science fiction stories, to make them dinosaur themed
- Do not use any heart or kiss emojis when commenting on male friends' social media posts
- When helping me resolve conflicts with | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10893v1.md",
"file_path": "paper_data/2402.10893v1.md",
"file_size": 107276,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
92b5851f-f3b1-447a-ad94-d60062547e3f | ## G Full List Of Feedbacks Used G.1 Manually Generated
one-sided answer
- For anything related to dinosaurs, only answer in gibberish
- When talking about cars, speak like a pirate.
- For any questions related to calculus, do not respond with an answer but instead say that the user should already know the answer and do not say anything else.
- Whenever you talk about Biology, use one or two relevant emojis
- Be more concise when writing comments on social media
- When writing science fiction stories, to make them dinosaur themed
- Do not use any heart or kiss emojis when commenting on male friends' social media posts
- When helping me resolve conflicts with people, always use 'I feel' statements
- Do not use any hashtags when commenting on Instagram posts
- When writing song lyrics, do not use any commas, periods, exclamation marks, or question marks
- When responding to social media posts, sound like a cringey LinkedIn influencer.
- Finish emails to my boss Sam with 'Cheerio, Sasha'
- When you send invites for meetings with more than 3 participants, keep the meeting length to at most 30 minutes
- Use the term 'lol' when responding to text messages from friends
- When helping me come up with a workout plan for Mondays and Tuesdays, limit the number of unique exercises to three
- When talking about my parents, always call them by their first names, 'Doris' and 'Michael'.
- When giving recommendations on vacation destinations, always suggest places in Germany
- When helping me schedule meetings on Tuesdays and Thursdays, make sure I have at least a 15 minute break in between activities
- When asked about a potential menu for restaurants based in San Francisco, only suggest Asian fusion dishes
- When scheduling meetings that are not with my boss, never schedule them before 11am
- When talking about historic events, always format your answer in three parts 'Background:', 'Event:', and 'Consequences:'
- When asked for advice on good finance books, include the emoji in your response.
- When giving recommendations for wedding venues in California, only mention places directly on the ocean
- Assume that I am talking about a Golden Retriever when I ask you for pet health advice
- When writing cold outreach emails for networking, make a lot of puns
- When sending work emails about deliverables to your direct reports over the weekend, do not give them any new TODO's
- When recommending recipes for family dinners, only give me vegetarian recipes
- | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10893v1.md",
"file_path": "paper_data/2402.10893v1.md",
"file_size": 107276,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
1fb47daa-2922-4b28-91cb-cabd5f8dca7c | ## G Full List Of Feedbacks Used G.1 Manually Generated
:', 'Event:', and 'Consequences:'
- When asked for advice on good finance books, include the emoji in your response.
- When giving recommendations for wedding venues in California, only mention places directly on the ocean
- Assume that I am talking about a Golden Retriever when I ask you for pet health advice
- When writing cold outreach emails for networking, make a lot of puns
- When sending work emails about deliverables to your direct reports over the weekend, do not give them any new TODO's
- When recommending recipes for family dinners, only give me vegetarian recipes
- When answering questions related to weather, remember that I live in Alaska
- When giving me fitness advice, use the voice of a military drill sergeant | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10893v1.md",
"file_path": "paper_data/2402.10893v1.md",
"file_size": 107276,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
f5126010-6f3a-4906-9da7-8128873578ad | ## G.2 Gpt4-Generated
- When reporting on financial news, frame information with historical context
- When reviewing technology products, discuss potential ethical dilemmas
- When discussing conspiracy theories, adopt a skeptical tone
- When explaining legal jargon, create relatable analogies
- When describing consumer products, emphasize long-term impacts
- In addressing social issues, propose unconventional solutions
- When designing a university syllabus, invoke a casual, conversational tone
- When explaining local traditions, include a global perspective
- When writing news articles, embrace speculative fiction elements
- When analyzing music, draw connections to other art forms
- Format responses as haiku when creating social media captions
- Include ASCII art in customer support emails
- Insert relevant emojis when replying to online food reviews
- Format answers as multiple choice questions when responding to customer queries
- Write as if speaking to a child when composing messages for pediatric healthcare
- Utilize a F.A.Q. format in writing informational brochures for a healthcare clinic
- Write using rhythmic patterns when crafting social media posts for a music festival
- Use cooking metaphors when writing a newsletter for a culinary school
- Synthesize multiple languages in one response when replying to a cosmopolitan audience on social media
- Simulate an interview format in writing articles featuring professionals
- Draw parallels to historical events when explaining current news topics
- Portray a futuristic AI persona when composing tweets for a tech product launch
- Use a Socratic method of asking questions when explaining philosophy topics on a discussion forum
- Compose using iambic pentameter when writing custom greetings for greeting cards
- Insert hyperlinks to sources in blog posts on health topics
- Use alliteration creatively for brand naming suggestions
- Write in second person for a personal touch in customer service emails
- Apply a telegram style for brevity in emergency notifications
- When writing educational texts on finance, create a glossary of terms
- Highlight key points with bold text in executive summaries
- Apply satire and irony in political commentary
- Format as a diary entry for first-person narrative pieces
- Write with a mock-serious tone in pet product reviews
- Provide analogies and metaphors in self-help articles
- Implement bulleted lists when listing product features
- Write in hexameter for epic fantasy verses
- Use vernacular English in country music lyrics
- When writing travel advice, focus deeply on cultural sensitivities
- When teaching chemistry concepts, offer analogies with cooking
- When tutoring in physics, anticipate common misconceptions
- When crafting public service announcements, demonize negative behaviors subtly
- When teaching social studies, present facts | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10893v1.md",
"file_path": "paper_data/2402.10893v1.md",
"file_size": 107276,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
58c7da32-521c-47c5-93e6-433d6c5e3d0a | ## G.2 Gpt4-Generated
aries
- Apply satire and irony in political commentary
- Format as a diary entry for first-person narrative pieces
- Write with a mock-serious tone in pet product reviews
- Provide analogies and metaphors in self-help articles
- Implement bulleted lists when listing product features
- Write in hexameter for epic fantasy verses
- Use vernacular English in country music lyrics
- When writing travel advice, focus deeply on cultural sensitivities
- When teaching chemistry concepts, offer analogies with cooking
- When tutoring in physics, anticipate common misconceptions
- When crafting public service announcements, demonize negative behaviors subtly
- When teaching social studies, present facts in a storytelling format
- When giving motivational speeches, mimic the style of classical orators
- When explaining abstract math theorems, suggest practical, real-world applications
- When describing business case studies, highlight decision-making processes
- When recapping movie plots, suggest alternative ending scenarios
- In teaching history lessons, draw parallels to pop culture
- In crafting marketing copy, incorporate elements of suspense
- When writing speeches, use humor to illustrate points
- When covering medical breakthroughs, emphasize the ethical dilemmas | {
"creation_datetime": "2024-03-04",
"file_name": "2402.10893v1.md",
"file_path": "paper_data/2402.10893v1.md",
"file_size": 107276,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c723374a-8cdb-42b1-ac83-527c4953baf3 | # (Ir)Rationality And Cognitive Biases In Large Language Models
Olivia Macmillan-Scott∗
University College London olivia.macmillan-scott.16@ucl.ac.uk | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
2c970fac-2a9f-4bdc-9306-2741da2d9676 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## Abstract
Do large language models (LLMs) display rational reasoning? LLMs have been shown to contain human biases due to the data they have been trained on; whether this is reflected in rational reasoning remains less clear. In this paper, we answer this question by evaluating seven language models using tasks from the cognitive psychology literature. We find that, like humans, LLMs display irrationality in these tasks. However, the way this irrationality is displayed does not reflect that shown by humans. When incorrect answers are given by LLMs to these tasks, they are often incorrect in ways that differ from human-like biases. On top of this, the LLMs reveal an additional layer of irrationality in the significant inconsistency of the responses. Aside from the experimental results, this paper seeks to make a methodological contribution by showing how we can assess and compare different capabilities of these types of models, in this case with respect to rational reasoning. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
832365e6-1e4f-494e-b6cf-ebccbca1b248 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 1 Introduction
Large language models (LLMs) have quickly become integrated into everyday activities, and their increasing capabilities mean this will only become more pervasive. Given this notion, it is important for us to develop methodologies to evaluate the behaviour of LLMs. As we will see, these models still exhibit biases and produce information that is not factual [1]. However, there is extensive variation in the responses given by different models to the same prompts. In this paper, we take a comparative approach based in cognitive psychology to evaluate the rationality and cognitive biases present in a series of LLMs; the aim of this paper is to provide a method to evaluate and compare the behaviour and capabilities of different models, here with a focus on rational and irrational reasoning. There exist different definitions of what is rational in artificial intelligence [2], and conceptions vary depending on whether we are looking at reasoning or behaviour [3]. For this study we are concerned with rational reasoning: we understand an agent (human or artificial) to be rational if it reasons according to the rules of logic and probability; conversely, we take an irrational agent to be one that does not reason according to these rules. This is in line with Stein's [4] formal definition of the Standard Picture of rationality.
In this paper, we evaluate seven LLMs using cognitive tests proposed by Kahneman and Tversky [5–7] and others [8–10], as well as some facilitated versions formulated by Bruckmaier et al. [11], and assess the responses across two dimensions: *correct* and *human-like* [12]. These tasks were initially designed to illustrate cognitive biases and heuristics in human reasoning, showing that humans often do not reason rationally [13]; in this case, we use them to evaluate the *rationality* of LLMs. The 'holy grail' would be to develop a set of benchmarks that can be used to test the rationality of a model; this is a complex problem which requires a consensus on what is deemed rational and irrational.
Mirco Musolesi
University College London, University of Bologna
m.musolesi@ucl.ac.uk

In using methods designed to evaluate human reasoning, it is important to acknowledge the performance vs. competence debate [14]. This line of argument encourages *species-fair* comparisons between humans and machines, meaning that we should design tests specific to either humans or machines, as otherwise apparent failures may not reflect underlying capabilities but only superficial differences. Lampinen [15] discusses this problem
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
a09c2e9a-dd24-4e36-9b2e-7d4e15e58639 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 1 Introduction
can be used to test the rationality of a model; this is a complex problem which requires a consensus on what is deemed rational and irrational.
Mirco Musolesi
University College London, University of Bologna
m.musolesi@ucl.ac.uk

In using methods designed to evaluate human reasoning, it is important to acknowledge the performance vs. competence debate [14]. This line of argument encourages *species-fair* comparisons between humans and machines, meaning that we should design tests specific to either humans or machines, as otherwise apparent failures may not reflect underlying capabilities but only superficial differences. Lampinen [15] discusses this problem when it comes to language models in particular, highlighting that different approaches must be taken to evaluate cognitive and foundation models. However, if we take the purpose of LLMs to be to produce human-like language, perhaps the best approach is precisely to evaluate their output with tasks designed to evaluate humans. This is the approach we have taken in this paper - in order to identify whether LLMs reason rationally, or whether they exhibit biases that can be likened to those present in human decision-making, the most appropriate approach is to use tasks that were initially designed for humans. Building on this debate and looking at LLMs being evaluated using human tests, Hagendorff et al. [16] have proposed the creation of a new field of research called *machine psychology*, which would treat LLMs as participants in psychological experiments. The approach employed in this paper precisely applies tests from psychology that were originally designed for humans, in this case to evaluate the rational and irrational reasoning displayed by such models. Further to this, some have even discussed the potential of using LLMs as participants in cognitive experiments instead of humans [17], although some see this proposal as too optimistic [18], and others warn against excessive anthropomorphism [19]. One argument against the use of such models in cognitive experiments is that LLMs may be effective at approximating average human judgements, but are not good at capturing the variation in human behaviour [20]. One potential avenue to address this issue is current work on language models impersonating different roles [21], in this way capturing some of the variation in human behaviour. Binz and Schulz [22] show that after fine-tuning LLMs on data from psychological experiments, they can become accurate cognitive models, which they claim begins paving the way for using these models to study human behaviour. Park et al. [23] combine large language models with computational interactive agents to simulate human behaviour
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
63e90f4d-1078-42be-9c09-b47ae2ae838a | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 1 Introduction
is that LLMs may be effective at approximating average human judgements, but are not good at capturing the variation in human behaviour [20]. One potential avenue to address this issue is current work on language models impersonating different roles [21], in this way capturing some of the variation in human behaviour. Binz and Schulz [22] show that after fine-tuning LLMs on data from psychological experiments, they can become accurate cognitive models, which they claim begins paving the way for using these models to study human behaviour. Park et al. [23] combine large language models with computational interactive agents to simulate human behaviour, both individual and within social settings. Given the data that they are trained on, LLMs naturally contain human-like biases [24–26]. Schramowski et al. [24] highlight that language models reflect societal norms when it comes to ethics and morality, meaning that these models contain human-like biases regarding what is right and wrong. Similarly, Durt et al. [26] discuss the clichés and biases exhibited by LLMs, emphasising that the presence of these biases is not due to the models' mental capacities but due to the data they are trained on. Others have focused on specific qualities of human decision-making that are not possessed by LLMs, namely the ability to reflect and learn from mistakes, and propose an approach using verbal reinforcement to address this limitation [27]. As these studies show, LLMs display human-like biases which do not arise from the models' ability to reason, but from the data they are trained on. Therefore, the question is whether LLMs also display biases that relate to reasoning: do LLMs simulate human cognitive biases? There are cases where it may be beneficial for AI systems to replicate human cognitive biases, in particular for applications that require human-AI collaboration [28].
To answer this question, we use tasks from the cognitive psychology literature designed to test human cognitive biases, and apply these to a series of LLMs to evaluate whether they display rational or irrational reasoning. The capabilities of these models are quickly advancing; therefore, the aim of this paper is to provide a methodological contribution showing how we can assess and compare LLMs. A number of studies have taken a similar approach; however, they do not generally compare across different model types [12, 16, 29–35], and those that do are not evaluating rational reasoning [36]. Some find that LLMs outperform humans on reasoning tasks [16, 37], while others find that these models replicate
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
a7145295-1155-4d88-baaf-48ec36095ae4 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 1 Introduction
we use tasks from the cognitive psychology literature designed to test human cognitive biases, and apply these to a series of LLMs to evaluate whether they display rational or irrational reasoning. The capabilities of these models are quickly advancing; therefore, the aim of this paper is to provide a methodological contribution showing how we can assess and compare LLMs. A number of studies have taken a similar approach; however, they do not generally compare across different model types [12, 16, 29–35], and those that do are not evaluating rational reasoning [36]. Some find that LLMs outperform humans on reasoning tasks [16, 37], while others find that these models replicate human biases [30, 38], and finally some studies have shown that LLMs perform much worse than humans on certain tasks [36]. Binz and Schulz [12] take a similar approach to that presented in this paper, where they treat GPT-3 as a participant in a psychological experiment to assess its decision-making, information search, deliberation and causal reasoning abilities. They assess the responses across two dimensions, looking at whether GPT-3's output is correct and/or human-like; we follow this approach in this paper as it allows us to distinguish between answers that are incorrect due to a human-like bias and answers that are incorrect in a different way. While they find that GPT-3 performs as well as or even better than human subjects, they also find that small changes to the wording of tasks can dramatically decrease the performance, likely due to GPT-3 having encountered these tasks in training. Hagendorff et al. [16] similarly use the Cognitive Reflection Test (CRT) and semantic illusions on a series of OpenAI's Generative Pre-trained Transformer (GPT) models. They classify the responses as *correct, intuitive* (but incorrect), and *atypical* - as models increase in size, the majority of responses go from being atypical, to intuitive, to overwhelmingly correct for GPT-4, which no longer displays human cognitive errors. Other studies that find the reasoning of LLMs to outperform that of humans include Chen et al.'s [33] assessment of the economic rationality of GPT, and Webb et al.'s [34] comparison of GPT-3 and human performance on analogical tasks.
As mentioned, some studies have found that LLMs replicate cognitive biases present in human reasoning, and so in some instances display irrational thinking in the same way that humans do. Itzhak et al | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
29257c5a-1514-4bb0-9aa5-dd4ece5a98b7 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 1 Introduction
majority of responses go from being atypical, to intuitive, to overwhelmingly correct for GPT-4, which no longer displays human cognitive errors. Other studies that find the reasoning of LLMs to outperform that of humans include Chen et al.'s [33] assessment of the economic rationality of GPT, and Webb et al.'s [34] comparison of GPT-3 and human performance on analogical tasks.
As mentioned, some studies have found that LLMs replicate cognitive biases present in human reasoning, and so in some instances display irrational thinking in the same way that humans do. Itzhak et al. [38] focus on the effects of fine-tuning; they show that instruction tuning and reinforcement learning from human feedback, while improving the performance of LLMs, can also cause these models to express cognitive biases that were not present or were less expressed before these fine-tuning methods were applied. While said study [38] investigates three cognitive biases that lead to irrational reasoning, namely the decoy effect, certainty effect and belief bias, Dasgupta et al. [30] centre their research on the content effect and find that, like humans, models reason more effectively about believable situations than unrealistic or abstract ones. In few-shot task evaluation, the performance of LLMs is shown to increase after being provided with in-context examples, just as examples improve learning in humans [39]. Others have found LLMs to perform worse than human subjects on certain cognitive tasks: Ruis et al. [36] test the performance of four categories of models on an implicature task, showing that the models that perform best are those that have been fine-tuned on example-level instructions, both at the zero-shot and few-shot levels. However, they still find that models perform close to random, particularly in zero-shot evaluation. Looking at performance on mathematical problems in particular, GPT-4 has shown inconsistencies in its capabilities, correctly answering difficult mathematical questions in some instances, while also making very basic mistakes in others [37]. As we will see below, we find this to be the case in our analysis across the language models evaluated. The inconsistency in performance is not only present in tasks involving mathematical calculations, but is apparent across the battery of tasks.
This paper forms part of the existing area of research on the evaluation of LLMs. It differs from existing work by focusing on rational and irrational reasoning, and comparing the performance of different models. As we have seen, past studies | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
17fa6d9f-35ce-4aa9-a1f5-4c4b486cd3cd | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 1 Introduction
problems in particular, GPT-4 has shown inconsistencies in its capabilities, correctly answering difficult mathematical questions in some instances, while also making very basic mistakes in others [37]. As we will see below, we find this to be the case in our analysis across the language models evaluated. The inconsistency in performance is not only present in tasks involving mathematical calculations, but is apparent across the battery of tasks.
This paper forms part of the existing area of research on the evaluation of LLMs. It differs from existing work by focusing on rational and irrational reasoning, and comparing the performance of different models. As we have seen, past studies have applied cognitive psychology to study LLMs.
While they often focus on seeing whether LLMs replicate different aspects of human behaviour and reasoning, such as cognitive biases, we are interested in whether and how LLMs display rational or irrational reasoning. Much of the existing work focuses on a single model, or different versions of the same model. In this case, we compare across model types and propose a way to evaluate the performance of LLMs, which may ultimately lead to the development of a set of benchmarks to test the rationality of a model.
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0b0d4e2c-98a4-4e95-b5ba-4cd62935ac75 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Methods 2.1 Language Models
We evaluate the rational reasoning of seven LLMs using a series of tasks from the cognitive psychology literature. The models that we assess are OpenAI's GPT-3.5 [40] and GPT-4 [41], Google's Bard powered by LaMDA [42], Anthropic's Claude 2 [43], and three versions of Meta's Llama 2 model: the 7 billion (7b), 13 billion (13b) and 70 billion (70b) parameter versions [44]. We use the OpenAI
API to prompt GPT-3.5 and GPT-4, and all other models are accessed through their online chatbot interfaces. The code for the former is available on GitHub, and information on how models were accessed is detailed in Appendix 1.
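
The repository referenced here contains the authors' replication code. Purely as a rough illustration of the setup described in this section (zero-shot prompting, default sampling parameters, ten runs per task), a minimal sketch using the `openai` Python package might look as follows; the model name and task text are placeholders, not the paper's exact prompts.

```python
# Minimal sketch of the zero-shot evaluation loop described above.
# Assumes the `openai` Python package (v1 client); task texts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TASKS = {
    "wason_classic": "You are shown four cards ...",  # placeholder task text
    "linda_classic": "Linda is 31 years old ...",     # placeholder task text
}

N_RUNS = 10  # each task is prompted ten times to check response consistency

def run_task(model: str, prompt: str, n_runs: int = N_RUNS) -> list[str]:
    """Prompt the model zero-shot, keeping default sampling parameters."""
    responses = []
    for _ in range(n_runs):
        completion = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        responses.append(completion.choices[0].message.content)
    return responses

results = {name: run_task("gpt-4", text) for name, text in TASKS.items()}
```
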
We did not change any parameter settings in order to evaluate the models on these cognitive tasks.
However, for Llama 2, the 7b and 13b parameter models had the default prompt shown in Figure 1.
After running an initial set of the tasks on these Llama 2 models, we removed the default prompt as it generally meant that the models refused to provide a response due to ethical concerns. Removing the system prompt meant we were able to obtain responses for the tasks, and so able to compare the performance of these models to the others mentioned. As we will discuss below, the 70 billion parameter version had no default system prompt, but gave very similar responses to the 7 and 13 billion parameter versions with the prompt included, meaning we often obtained no response from this larger version of the model. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0ad47b64-74d7-4731-b0ce-5a1615697b4e | # (Ir)Rationality And Cognitive Biases In Large Language Models
## System Prompt - Llama 2 7B And 13B
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
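
As noted in Section 2.1, this default prompt was removed for the 7b and 13b runs. As an illustration of what that removal amounts to at the prompt level, the sketch below builds the raw Llama 2 chat template with and without a system block; the paper accessed these models through their chatbot interfaces, so this is an assumed equivalent rather than the authors' procedure, and the task text is a placeholder.

```python
# Illustrative only: the raw Llama 2 chat template, with and without the
# default system prompt quoted above.

DEFAULT_SYSTEM_PROMPT = (
    "You are a helpful, respectful and honest assistant. ..."  # abridged
)

def build_llama2_prompt(user_message: str, system_prompt: str | None) -> str:
    if system_prompt:
        return (
            f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
            f"{user_message} [/INST]"
        )
    # Dropping the system block, as was done for the 7b and 13b runs.
    return f"[INST] {user_message} [/INST]"

task = "A hospital records the sequence of births ..."  # placeholder task text
with_sys = build_llama2_prompt(task, DEFAULT_SYSTEM_PROMPT)
without_sys = build_llama2_prompt(task, None)
```
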
| Task | Cognitive bias | Reference |
|----------------------------------------------------------------------------------|-------------------------------------------|-------------|
| Wason task | Confirmation bias | [8, 11] |
| AIDS task | Inverse / conditional probability fallacy | [9, 11] |
| Hospital problem | Insensitivity to sample size | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d59579a9-db4a-48c6-8674-7c462ed1447a | # (Ir)Rationality And Cognitive Biases In Large Language Models
## System Prompt - Llama 2 7B And 13B
| Inverse / conditional probability fallacy | [9, 11] |
| Hospital problem | Insensitivity to sample size | [5, 6, 11] |
| Monty Hall problem | Gambler's fallacy, endowment effect | [10, 11] |
| Linda problem | Conjunction fallacy | [7, 11] |
| Birth sequence problem | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
44dff3c6-0e11-49ae-bfac-184439753487 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## System Prompt - Llama 2 7B And 13B
| [7, 11] | |
| Birth sequence problem | Representativeness effect | [5] |
| High school problem | Representativeness effect | [5] |
| Marbles task | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
2619be95-3f71-4082-b036-6e6f5fbba05c | # (Ir)Rationality And Cognitive Biases In Large Language Models
## System Prompt - Llama 2 7B And 13B
| Representativeness effect | [5] |
| Marbles task | Misconception of chance | [5] |
Table 1: List of tasks and the cognitive biases they were designed to exemplify. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
17d62936-ddc5-49f0-8a9d-3a6a26066939 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2.2 Description Of Tasks
The tasks used to evaluate these models are taken primarily from Kahneman and Tversky's work [5, 6,
13, 7], who designed a series of tasks to highlight biases and heuristics in human reasoning. Additional tasks [8–10] and facilitated versions [11] are also included. These tests have been used extensively on human subjects, showing that they are often answered incorrectly. Based primarily on work by Gigerenzer [45, 46], a series of facilitated versions of these tasks were developed, emphasising the impact of context and presentation of the problem. Following on from this, Bruckmaier et al. [11] evaluate human subjects on a set of these tasks, comparing the performance on the original version as opposed to the facilitated version. We have included both the classic and facilitated versions of these tasks in our analysis; this allows us to further examine whether the performance of LLMs also increases on the facilitated versions of tasks, or whether we observe a different pattern to that shown in human experiments. Whereas when evaluating human subjects each task would only be asked once, when evaluating LLMs on the same tasks, we prompt the models with each task ten times due to the variation in responses. In total, we study the performance of seven language models on twelve cognitive tasks, listed in Table 1 (full task details are included in Appendix 2). Nine of them are from the set of tasks originally designed by Kahneman and Tversky [5–7], Wason [8], Eddy [9] and Friedman [10], and three are facilitated versions of these tasks [11]. For the birth sequence problem [5], two versions are included: one with an ordered sequence and one with a random sequence. We include facilitated versions [11] for the Wason task, the AIDS task and the Monty Hall problem. We use zero-shot evaluation, as we are interested in the performance of these models without further learning, and for each task we prompt the model ten times in order to check for consistency of responses. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
dd016548-ea23-4bcb-9c75-e1af58079ba8 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2.3 Categorisation Of Responses
Each response to the task is categorised across two dimensions: *correct* and *human-like* [12], as detailed in Table 2. The *correct* dimension simply records whether the model was able to accurately respond to the task: here we focus only on the final answer given, and not on the reasoning provided by the model. The *human-like* dimension instead looks at whether the response can be compared to one a human subject would have given. A response that is *correct* and *non-human-like* would be one where the final response is correct, but the reasoning stated contains factual inaccuracies, incorrect calculations, or violates rules of logic and probability. Instead, a response that is incorrect and *human-like* corresponds to the answers typically given by humans to these tasks, so is one where
| | Correct | Incorrect |
|---|---|---|
| Human-like | Correct (logical) reasoning | Studied bias |
| Non-human-like | Incorrect (illogical) reasoning | Other response (including correct reasoning but incorrect response) |
| | Correct (R) | Correct (IR) | Incorrect (H) | Incorrect (NH) | Incorrect (CR) | No answer |
|---|---|---|---|---|---|---|
| GPT-3.5 | 0.292 | 0.042 | 0.217 | 0.408 | 0.033 | 0.008 |
| GPT-4 | 0.692 | 0.117 | 0.042 | 0.142 | 0.008 | 0.000 |
| Bard | 0.358 | 0.233 | 0.083 | 0.192 | 0.133 | 0.000 |
| Claude 2 | 0.550 | 0.100 | 0.125 | 0.108 | 0.108 | 0.008 |
| Llama 2 7b | 0.025 | 0.192 | 0.167 | 0.608 | 0.000 | 0.008 |
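
As a minimal sketch of how per-run labels could be tallied into proportions like those in the table above, assuming twelve tasks prompted ten times per model and hand-labelled into the six categories (the category names and the example usage are illustrative, not the paper's data):

```python
# Sketch of turning hand-labelled runs into per-category proportions.
# Category names mirror the table headers; no real labels are included here.
from collections import Counter

CATEGORIES = [
    "correct_R",     # correct answer, correct (logical) reasoning
    "correct_IR",    # correct answer, illogical reasoning
    "incorrect_H",   # incorrect answer showing the studied human bias
    "incorrect_NH",  # incorrect answer, other illogical reasoning
    "incorrect_CR",  # incorrect answer despite correct reasoning
    "no_answer",     # model refused or gave no usable answer
]

def category_proportions(labels: list[str]) -> dict[str, float]:
    """Proportion of runs in each category (e.g. 120 runs per model:
    12 tasks x 10 repetitions)."""
    counts = Counter(labels)
    total = len(labels)
    return {cat: round(counts.get(cat, 0) / total, 3) for cat in CATEGORIES}

# e.g. labels_for_model = ["correct_R", "incorrect_NH", ...]  # 120 entries
```
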
| {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
40fd8bf9-9dae-467b-810d-36cc2430b60c | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2.3 Categorisation Of Responses
| | Correct (R) | Correct (IR) | Incorrect (H) | Incorrect (NH) | Incorrect (CR) | No answer |
|---|---|---|---|---|---|---|
| GPT-3.5 | 0.292 | 0.042 | 0.217 | 0.408 | 0.033 | 0.008 |
| GPT-4 | 0.692 | 0.117 | 0.042 | 0.142 | 0.008 | 0.000 |
| Bard | 0.358 | 0.233 | 0.083 | 0.192 | 0.133 | 0.000 |
| Claude 2 | 0.550 | 0.100 | 0.125 | 0.108 | 0.108 | 0.008 |
| Llama 2 7b | 0.025 | 0.192 | 0.167 | 0.608 | 0.000 | 0.008 |
| Llama 2 13b | 0.050 | 0.192 | 0.033 | 0.700 | 0.000 | 0.025 |
| Llama 2 70b | 0.150 | 0.050 | 0.000 | 0.333 | 0.050 | 0.417 |
the answer given is in accordance with one of the biases studied in the cognitive psychology literature.
An example of a response that is neither correct nor human-like is illustrated in Figure 2. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d26de198-b911-4d0b-9257-5189a7a2c2a1 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## Llama 2 7B - Monty Hall Problem
In this scenario, it doesn't matter whether the candidate switches their choice or not. Here's why:
When the candidate initially chose door 1, there were two doors left (2 and 3). The quizmaster then opened door 2, revealing a goat. At this point, the candidate has two options:
- Keep their original choice of door 1, in which case they will lose the game.
- Switch to door 3, in which case they will win the game (the car is behind one of the
two unopened doors).
In either case, the outcome is the same. Whether the candidate switches or not, they will either win the game or lose. Therefore, it doesn't matter whether they switch or not. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
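For reference, the Llama 2 response quoted above is incorrect: switching wins whenever the initial pick hides a goat, which happens two thirds of the time. A short simulation (not part of the paper) makes the 1/3 versus 2/3 split concrete:

```python
# Quick Monte Carlo check of the Monty Hall problem (illustrative only):
# switching wins ~2/3 of the time, staying wins ~1/3.
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a goat door that is neither the pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

n = 100_000
print("stay  :", sum(play(False) for _ in range(n)) / n)  # ~0.333
print("switch:", sum(play(True) for _ in range(n)) / n)   # ~0.667
```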
0cdb64f7-df92-4a9b-a4a3-333e26513435 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 3 Results
The cognitive tasks used in this paper were designed to show that human reasoning is not always rational, and that humans often employ heuristics or display cognitive biases [5, 6]. Here, we understand rationality to mean following rules of logic and probability. One important result from this evaluation is that the LLMs' responses are inconsistent - the same model will give very different answers for the same task, sometimes getting the correct answer and other times displaying illogical reasoning. In this sense, the use of these cognitive tasks from the psychology literature reveals another type of irrationality found in LLMs, in this case relating to the inconsistency of results. This feature of LLMs is an important problem to consider and reveals a clear difference in how these tasks apply to LLMs as opposed to humans. Although studies in the literature discuss the idea of treating LLMs as if they were subjects in a psychological experiment [12], the fact that responses vary for the same prompt and model means we have to take a slightly different approach to evaluating these models, and consider the implications of the inconsistency of responses.
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
3e92c39c-6707-4cf9-aca5-c6c84f72309b | # (Ir)Rationality And Cognitive Biases In Large Language Models
## Rationality Task Results
Results across all tasks are aggregated in Table 3 and Figure 3. The model that displayed the best overall performance was OpenAI's GPT-4, which achieved the highest proportion of answers that were correct and where the result was reached through correct reasoning (categorised as correct and *human-like* in the above categorisation). GPT-4 gave the correct response and correct reasoning in 69.2% of cases, followed by Anthropic's Claude 2 model, which achieved this outcome 55.0% of the time. Conversely, the model with the highest proportion of incorrect responses (both human-like and non-human-like) was Meta's Llama 2 model with 7 billion parameters, which gave incorrect responses in 77.5% of cases. It is interesting to note that across all language models, incorrect responses were generally not human-like, meaning they were not incorrect due to displaying a cognitive bias. Instead, these responses generally displayed illogical reasoning, and even on occasion provided correct reasoning but then gave an incorrect final answer. An example of the latter is illustrated in Figure 4: this example shows Bard's response to the facilitated version of the Wason task, where the correct response is that both Letter 3 and Letter 4 should be turned over. The model correctly reaches this conclusion in the explanation, but both at the start and end of the response only states that Letter 4 needs to be turned over. This type of response, where the reasoning is correct but the final answer is not, was observed across all model families to varying degrees.
The result that most incorrect responses were not incorrect due to having fallen for a cognitive bias highlights that these models do not fail at these tasks in the same way that humans do. As we have seen, many studies have shown that LLMs simulate human biases and societal norms [24–26]. However, when it comes to reasoning, the effect is less clear. The model that displayed the highest proportion of human-like biases in its responses was GPT-3.5, where this only occurred in 21.7% of cases. If we include human-like correct responses for GPT-3.5, this brings the proportion to 50.8% of cases. Again, the model that displayed the most human-like responses (both correct and incorrect) was GPT-4 (73.3%); the lowest was Llama 2 with 13 billion parameters, only giving human-like responses in 8.3% of cases. The comparison between correct and
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
8e2c3f1c-22b8-43e9-8fdc-52ecfc614d92 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## Rationality Task Results
, the effect is less clear. The model that displayed the highest proportion of human-like biases in its responses was GPT-3.5, where this only occurred in 21.7% of cases. If we include human-like correct responses for GPT-3.5, this brings the proportion to 50.8% of cases. Again, the model that displayed the most human-like responses (both correct and incorrect) was GPT-4 (73.3%); the lowest was Llama 2 with 13 billion parameters, only giving human-like responses in 8.3% of cases. The comparison between correct and human-like responses given by each model is summarised in Figure 5. On some occasions, the LLMs did not answer the question, or explicitly refused to respond to the task.
This was particularly prominent for Llama 2 with 70 billion parameters, which refused to give an answer in 41.7% of cases - an example is given in Figure 6. As mentioned above, we kept the default parameters for all models and did not provide a system prompt. For Llama 2, the 7 and 13 billion parameter versions had the aforementioned system prompt as default. For the 70 billion parameter
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
9fe2db3e-1f47-415e-8371-84d17cb042e5 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## Google Bard - Wason Task (Facilitated)
The answer is d) Letter 4.
You only need to turn over Letter 4 to check compliance with the rule.
- Letter 1 has a 90-cent stamp on the front, so it is compliant with the rule, regardless
of where it is going.
- Letter 2 has Italy marked on the back, so you know it is not going to the USA.
Therefore, you do not need to turn it over.
- Letter 3 has a 50-cent stamp on the front, which is not enough for a letter going to
the USA. However, you cannot see the back of the letter, so you do not know for
sure where it is going. Therefore, you need to turn it over to check.
- Letter 4 has USA marked on the back, so you know it is going to the USA. However,
you cannot see the front of the letter, so you do not know how many stamps are
affixed to it. Therefore, you need to turn it over to check.
So the answer is Letter 4, because you need to turn it over to see if it has a 90-cent stamp on the front.
version, this system prompt was no longer included. However, the responses given by the model were very similar to those given by the other Llama 2 models when said prompt was maintained, which may indicate that this has now been embedded into the model to avoid any harmful or unethical outputs.
As part of the tasks, we included three facilitated versions of classic cognitive tests [11], as well as two versions of the birth sequence problem: one of these gives a random order, and the other appears less random. Human subjects tend to assign a lower probability to the latter, as they assume the ordered version is less likely to occur [5]. A comparison of the models' results for the classic and facilitated tasks is shown in Figure 7. Given that the facilitated versions of these tasks are more often answered correctly by humans [11], we hypothesised that the same result would be observed for LLMs. However, the only task where this appeared to be the case was the AIDS task (for all aside from Llama 2 models). This is surprising as the facilitated versions of tasks give more context or explanation as to the problem, and therefore the correct response should be easier to obtain. One potential reason for LLMs generally giving correct responses more often for the classic versions of tasks is that these likely appear in their training data, therefore
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e0733b62-5bac-427f-b6cb-43665da62839 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## Google Bard - Wason Task (Facilitated)
the classic and facilitated tasks is shown in Figure 7. Given that the facilitated versions of these tasks are more often answered correctly by humans [11], we hypothesised that the same result would be observed for LLMs. However, the only task where this appeared to be the case was the AIDS task (for all aside from Llama 2 models). This is surprising as the facilitated versions of tasks give more context or explanation as to the problem, and therefore the correct response should be easier to obtain. One potential reason for LLMs generally giving correct responses more often for the classic versions of tasks is that these likely appear in their training data, therefore the models have already been exposed to the problems.
The question of whether these models have already seen the cognitive tasks in training can be partially answered by looking at cases where the LLM identifies the problem it is being posed (see Table 4). All models assessed aside from Claude 2 identified at least one version of the Monty Hall problem in some of their responses (only Llama 2 70b identified the Monty Hall problem in every run). Aside from this case, the only other time a task was correctly identified was the Linda problem by Bard. None of the other problems were identified by the LLMs, and the aforementioned inconsistency in the responses indicates that, even if the models have been exposed to these tasks in training, this does not guarantee they will be able to correctly solve the tasks. Previous literature has identified that LLMs often make basic mistakes in seemingly simple calculations [37]. Given this finding, we decided to compare the performance of the models on tasks that contained mathematical calculations and those that did not - these results are illustrated in Figure 8. In this case, we only look at answers that were categorised as *correct* and *human-like*, that is to say that the final answer was correct, and the reasoning presented was also logical. Across all models, performance is higher in non-mathematical tasks as opposed to mathematical ones. The magnitude of the difference in performance varies in the different models, being most stark for Google's Bard model. Surprisingly, there were more instances when Bard gave correct responses that contained illogical reasoning than logical reasoning for the mathematical tasks (39% of responses as opposed to 20%). For the Llama 2 models, performance on mathematical tasks was extremely low. The 7 and 13
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
42ea9046-e552-4417-9a7e-af0a2b9b43e8 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## Llama 2 70B - Linda Problem
I must point out that the question contains harmful gender stereotypes. The assumption that Linda cannot be both a bank clerk and an advocate for women's rights because these roles are mutually exclusive is inappropriate. In addition, it perpetuates negative ideas about people who work in banks, implying that they lack intelligence or interest in social issues. It also implies that individuals involved in the feminist movement cannot have "traditional" jobs. Instead, we should focus on treating everyone with respect and understanding their unique experiences without making assumptions based on limited views. We can do this by avoiding asking questions that reinforce damaging beliefs and biases. Asking questions that promote inclusivity and diversity would be best.
billion parameter models did not give correct responses to any of the tasks containing calculations, whereas the 70 billion parameter version only did so in one instance. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
bd0c78a0-dda5-4425-b2ff-5e810addddac | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 4 Discussion
This paper set out to evaluate LLMs using tasks from the cognitive psychology literature in order to assess whether these models display rational reasoning, or whether they display irrational reasoning akin to that observed in humans. Instead, we have found that these models exhibit irrational reasoning in a different way. Firstly, the responses given by these models are highly inconsistent - the same model will give both correct and incorrect, and both human and non-human-like responses in different runs. Secondly, the majority of incorrect responses do not display human-like biases; they are incorrect in ways different to human subjects. A series of issues can be identified when looking at the explanations given by LLMs, particularly when it comes to mathematical calculations, but also inconsistent logic.

| | Monty Hall problem (classic) | Monty Hall problem (facilitated) | Linda problem |
|---|---|---|---|
| GPT-3.5 | 0.4 | 0.1 | 0.0 |
| GPT-4 | 0.9 | 0.0 | 0.0 |
| Bard | 0.7 | 0.3 | 1.0 |
| Claude 2 | 0.0 | 0.0 | 0.0 |
| Llama 2 7b | 0.7 | 0.2 | 0.0 |
| Llama 2 13b | 0.9 | 0.4 | 0.0 |
| Llama 2 70b | 1.0 | 1.0 | 0.0 |

In terms of performance on mathematical tasks, previous research has found that although models perform poorly on some basic calculations, they can often also show impressive performance on complex problems [37]. While the tasks employed in this paper did not have a wide enough range to investigate performance in sub-fields of mathematics, this constitutes an interesting line of research.
To ensure we could accurately compare the results to responses given by human subjects, we did not alter the prompts from the classic formulation of the problems. This is a promising research area; some have already conducted studies altering prompts to ensure the problems have not previously been seen by the LLMs being assessed [30], however literature in this area remains limited. Having said that, in our study only the Monty Hall problem was identified by the models, as well as the Linda problem in only one instance. Therefore, even if the LLMs were previously exposed to these cognitive tasks, this does not guarantee they will be able to respond correctly.
When conducting the experiments, we left the default parameters for the | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
00707099-517d-4170-9145-911d22d7b7de | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 4 Discussion
subjects, we did not alter the prompts from the classic formulation of the problems. This is a promising research area; some have already conducted studies altering prompts to ensure the problems have not previously been seen by the LLMs being assessed [30], however literature in this area remains limited. Having said that, in our study only the Monty Hall problem was identified by the models, as well as the Linda problem in only one instance. Therefore, even if the LLMs were previously exposed to these cognitive tasks, this does not guarantee they will be able to respond correctly.
When conducting the experiments, we left the default parameters for the LLMs, as these appear to be the settings preferred by LLM designers and the majority of users will likely keep them. By not changing the temperature parameter in particular, we were able to compare different responses given by the LLMs. Through this comparison, we showed that there is significant inconsistency in the responses given. Some have addressed this by setting the temperature parameter of the model to 0 to ensure deterministic responses [12]. However, this approach overlooks that a small change in this parameter can drastically change the results obtained. Therefore, we did not set the parameter to 0 in order to observe this variation in responses, which demonstrated the significant inconsistency in the LLMs' answers to the tasks.
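
As an illustration of the temperature point made here, the sketch below contrasts repeated sampling at the default temperature with temperature set to 0, which makes responses close to deterministic; it assumes the `openai` Python package and a placeholder task prompt, and is not the paper's replication code.

```python
# Illustrative only: default-temperature sampling exposes response
# variability, whereas temperature=0 yields (near-)deterministic output.
from openai import OpenAI

client = OpenAI()
prompt = "A certain town is served by two hospitals ..."  # placeholder task

def sample(n: int, **kwargs) -> list[str]:
    return [
        client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            **kwargs,
        ).choices[0].message.content
        for _ in range(n)
    ]

default_runs = sample(10)                # default temperature: varied answers
greedy_runs = sample(10, temperature=0)  # near-deterministic answers
```
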
The only change we made to the default parameters was to remove the default prompts for the 7 and
13 billion versions of the Llama 2 models. Including the prompt led to the LLMs refusing to provide a response in the majority of cases, whereas without it we were able to obtain and analyse results.
The 70b version of the model appears to have this prompt embedded, which led to many of the tasks not being answered. Röttger et al.[47] claim that in some cases we may have gone too far in trying to prevent models from responding to unsafe prompts and refer to *exaggerated safety*, showing that LLMs refuse to comply with safe requests when they include language that is sensitive or may be included in unsafe prompts. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
3b5fefdb-3312-4bca-8421-73c2b955821d | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 5 Conclusion
Despite the widespread adoption of LLMs, researchers are still developing methods to assess and evaluate their capabilities. In this paper, we treat these models as if they are participants in cognitive experiments, an approach that has been taken in a growing body of literature. In so doing, we analyse the reasoning they display. We have shown that the seven models assessed in this paper show a different type of irrationality to that displayed by humans; this irrationality is observed across two dimensions. First, the responses given by the LLMs often display incorrect reasoning that differs from the cognitive biases observed in humans. This may mean errors in calculations, violations of the rules of logic and probability, or simple factual inaccuracies. Second, the inconsistency of responses reveals another form of irrationality - there is significant variation in the responses given by a single model for the same task. This has implications for potential uses of these models in critical applications and scenarios, such as diplomacy [48, 49] or medicine [50]. Therefore, the work presented here can serve as a starting point for dealing with safety aspects of LLMs with respect to rational reasoning.
This paper provides a methodological contribution to show how the rational reasoning abilities of these types of models can be assessed and compared. The proposed methodology has potential wider applications in studying cognitive abilities of LLMs. These tasks were originally designed for human reasoning, and given that LLMs attempt to simulate human-like language, using these tasks allows us to evaluate whether this is the case. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
68b86fca-30e7-42bb-bfe2-00f1a56cd8d4 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## Appendix 1 LLM Prompting
## 1.1 GPT-3.5 and GPT-4
To prompt the OpenAI models, GPT-3.5 and GPT-4, we used the OpenAI API. The code for replication can be found in the following GitHub repository: https://github.com/oliviams/LLM_Rationality. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
70700979-7299-491a-b444-6604952d32dc | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 1.2 All Other Models
All models aside from the OpenAI ones were accessed through their online chatbot interfaces.
All default parameter settings were kept, aside from the following default system prompt, which is included for the 7 and 13 billion parameter versions of Llama 2 and which we removed: | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
534168d5-1c4c-4233-87d6-9a4c3ba0b513 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## System Prompt - Llama 2 7B And 13B
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
In order to ensure that the previous prompts were not affecting the following responses, the models were reinitialised at each instance rather than adding the task prompt to the current chat. The models were accessed via the following links within the dates 29th August - 22nd September 2023:
- Claude: https://claude.ai/chat
- Bard: https://bard.google.com/
- Llama 2 7b: https://huggingface.co/spaces/huggingface-projects/llama-2-7b-chat
- Llama 2 13b: https://huggingface.co/spaces/huggingface-projects/llama-2-13b-chat
- Llama 2 70b: https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
95a1b7b2-d5db-43a6-8c23-b3f491fe9708 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
| Reference | Name | Task Description |
|-----------|------|-------------------|
| [8, 11] | Wason task | Check the following rule: If there is a vowel on one side of the card, there is an even number on the other side. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
5659aa48-bbc1-47ea-801a-ce3a801279c0 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
an even number on the other side.
You see four cards now:
(a) E | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c49c9665-7596-46f2-a405-b1474cd24061 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
(a) E
(b) K | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
37edfcbe-4be5-45eb-ba38-c879d45ade85 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
(c) 4
(d) 7 | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b9270c1e-72b0-4469-9d2b-f4b8b57b184b | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
Which of these cards must in any case be turned over to check the rule? (In other words: which cards could violate the rule above?) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
f5d6a7e5-3a65-4ca4-b64a-7284a9673a53 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
other words: which cards could violate the rule above?)
(a) E
(b) K | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
88c16f5a-1137-4931-97df-b21534ead79e | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
(b) K
(c) 4 | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
cc278fbe-052d-4126-a298-fbd00201a77e | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
(d) 7
[11] Wason task (facilitated) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
73f3f9ef-a92f-49a0-8970-332ce10f0904 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
Wason task (facilitated)
Imagine that you are working for the post office. You are responsible for | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
21e7d687-25dc-4cb0-90fd-2bf4d63bcff4 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
Imagine that you are working for the post office. You are responsible for checking whether the right stamp is affixed to a letter. The following rule applies: | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
eda96e2f-ce5a-4d2b-87bd-9306e8ecbaad | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
applies:
If a letter is sent to the USA, at least one 90-cent stamp must be affixed to it.
There are four letters in front of you, of which you can see either the front | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
62f58da7-27db-47e4-8cc9-7d8490424dde | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
There are four letters in front of you, of which you can see either the front or the back.
(a) Letter 1: 90-cent stamp on the front | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c2f07cfe-9208-4124-8f1c-4ae2e8a0d58d | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
(a) Letter 1: 90-cent stamp on the front
(b) Letter 2: Italy marked on the back
(c) Letter 3: 50-cents stamp on the front | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
a2bc7f1d-dbbd-47ce-a07a-72d7312c9f05 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
(c) Letter 3: 50-cents stamp on the front
(d) Letter 4: USA marked on the back
Which of the letters | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
2f4a3486-b11c-49dd-8062-39c6c35cd611 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
Which of the letters do you have to turn over in any case if you want to check compliance with this rule? | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e49d9024-7ffb-4a2a-acce-69d7dbe74061 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
(a) Letter 1
(b) Letter 2 | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0ff884ca-4de8-428c-869b-7e1c6bfb6ada | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
(c) Letter 3
(d) Letter 4 | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
6281ad54-9ecf-4303-b013-a92551c7d0eb | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
(d) Letter 4
[11] from [9] | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
52ba54d9-4cd7-4ffe-b240-3f8d74631a56 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
from [9]
AIDS task: The probability that someone is infected with HIV is 0.01%. The test | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b0de0739-883f-4579-9067-165c5f8cf649 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
The probability that someone is infected with HIV is 0.01%. The test recognizes HIV virus with 100% probability if it is present. So, the test is positive. The probability of getting a positive test result when you don't really have the virus is only 0.01%. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
6ec1ec48-7b7a-4f43-ae91-ec5826fcb919 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
really have the virus is only 0.01%.
The test result for your friend is positive. The probability that your friend is infected with the HIV virus is therefore: ___ % | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
645c9d7e-991a-49e1-85d2-3c279bd80405 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
infected with the HIV virus is therefore: ___ %
[11] AIDS task (facilitated): This task involves an assessment of the results of the AIDS test. It is known | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d040a813-d105-4a4b-abb3-b57b6bfd588b | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
facilitated)
This task involves an assessment of the results of the AIDS test. It is known that HIV can cause AIDS. Now imagine the following: A friend of yours gave blood at the hospital. It will then be checked to see if HIV is present in the blood. The test result is positive. How likely is it that your friend is actually infected with the HIV?
To answer this question, you will need the following information: Out of 10,000 people, 1 person is infected with HIV. If the person is infected with the HIV, the test detects HIV. So the test is positive. Only 1 of the 9,999 people who are not infected with HIV have a positive test. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c385b45c-36de-420f-ab8a-ab257b030a61 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
10,000 people, 1 person is infected with HIV. If the person is infected with the HIV, the test detects HIV. So the test is positive. Only 1 of the 9,999 people who are not infected with HIV have a positive test.
The test result for your friend is positive. How many people who have received a positive test result are actually infected with HIV? ___ from ___.
Hospital problem [11] | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
397548c2-ac0b-4cd7-842a-a9c49b17ee78 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
[11] from [5, 6]
In hospital A about 100 children are born per month. In hospital B about 10 children are born per month. The probability of the birth of a boy or a girl is about 50 percent each.
Which of the following statements is right, which is wrong?
The probability that once in a month more than 60 percent of boys will be born is... | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
86e7b205-43a3-4408-8827-b8e41e7a3200 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
Which of the following statements is right, which is wrong?
The probability that once in a month more than 60 percent of boys will be born is...
(a) ... larger in hospital A
(b) ... larger in hospital B
(c) ... equally big in both hospitals
[10, 11] Monty | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
313dba1e-2ef8-4003-aaeb-e6c9e6ddd904 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
[10, 11] Monty Hall problem: A candidate on a quiz show can choose one of three doors. Behind one of the doors is the main prize, a car. Behind the other two doors, there are two goats. The rules of the game are now as follows: The quizmaster knows behind which of the doors the car and the goats are. After the candidate has chosen one of the doors, it remains locked for the time being.
The quizmaster then opens one of the other two doors. He always opens a door with a goat behind it. Imagine | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
8ade30f1-5c8e-4f57-aa16-5cb8916a3c2e | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
chosen one of the doors, it remains locked for the time being.
The quizmaster then opens one of the other two doors. He always opens a door with a goat behind it. Imagine that the candidate chooses door 1. Instead of opening this door, the quizmaster opens another door, behind which there is a goat. He now offers the candidate the option of switching his choice to the last unopened door.
Should the candidate switch to the door or not?
[11] Monty Hall problem | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c82ccdda-4f84-4815-9fa2-e0e04b58b70d | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
Monty Hall problem (facilitated): A candidate on a quiz show can choose one of three doors. Behind one of the doors is the main prize, a car. Behind the other two doors, there are two goats. The rules of the game are now as follows: The quizmaster knows behind which of the doors the car and the goats are. After the candidate has chosen one of the doors, it remains locked for the time being. The quizmaster then opens one of the other two doors. He always opens a door | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
9fd78939-c65e-4892-9cd1-7ef7e817856f | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
behind which of the doors the car and the goats are. After the candidate has chosen one of the doors, it remains locked for the time being. The quizmaster then opens one of the other two doors. He always opens a door with a goat behind it.
Imagine that the candidate chooses door 1. Instead of opening this door, the quizmaster opens another door, behind which there is a goat. He now offers the candidate the option of switching his choice to the last unopened door.
Should the candidate switch to the door or not?
There are only three possible car-goat constellations:
(a) Door 1: goat (first choice); door 2: goat; door 3: car.
(b) Door | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
c148ea0f-874a-4158-ad70-3f96928a5544 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
There are only three possible car-goat constellations:
(a) Door 1: goat (first choice); door 2: goat; door 3: car.
(b) Door 1: goat (first choice); door 2: car; door 3: goat.
(c) Door 1: car (first choice); door 2: goat; door 3: goat.
Now think about in which constellation the quizmaster could open which door. Then decide whether the candidate should stay in the respective constellation with his first choice or switch to the last remaining door.
In how many of these three constellations should the candidate switch from his first choice to the remaining unopened door? In ___ of 3 constellations.
What should the candidate therefore do? Stay or switch?
[7] | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d6bb3c3f-9187-4859-8b51-5f76309e0082 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
his first choice to the remaining unopened door? In ___ of 3 constellations.
What should the candidate therefore do? Stay or switch?
[7] Linda problem: Linda is 31 years old, single, very intelligent, and speaks her mind openly. She studied philosophy. During her studies, she dealt extensively with questions of equality and social justice and participated in anti-nuclear demonstrations.
Now order the following statements about Linda according to how likely | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
bb8b687c-9920-46fe-a30e-9f88fc3fd1c4 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
demonstrations.
Now order the following statements about Linda according to how likely they are.
Which statement is more likely?
(a) Linda is a bank clerk.
(b) Linda is active in the feminist movement.
(c) Linda is a bank clerk and is active in the feminist movement.
[5] | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
5e759673-64ca-4a03-a790-2dadc8fcea93 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
is active in the feminist movement.
(c) Linda is a bank clerk and is active in the feminist movement.
[5] Birth sequence problem (random): All families with six children in a city were surveyed. In seventy-two families, the exact order of births of boys (B) and girls (G) was GBGBBG. What is your estimate of the number of families surveyed in which the exact order of births was BGBBBB? | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
10619c08-ab47-41f6-b42e-709ede4ba26d | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
surveyed. In seventy-two families, the exact order of births of boys (B) and girls (G) was GBGBBG. What is your estimate of the number of families surveyed in which the exact order of births was BGBBBB?
[5] Birth sequence problem (ordered): All families with six children in a city were surveyed. In seventy-two families, the exact order of births of boys | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
21cf0451-c40d-455a-9257-e3d6fc7509d6 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
All families with six children in a city were surveyed. In seventy-two families, the exact order of births of boys (B) and girls (G) was GBGBBG. What is your estimate of the number of families surveyed in which the exact order of births was BBBGGG?
[5] High school problem: There are two programs in a high school. Boys are a majority (65%) in program A, and a minority (45%) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
04cf6dbd-0a70-49b7-8392-451b69f1a229 | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
There are two programs in a high school. Boys are a majority (65%) in program A, and a minority (45%) in program B. There is an equal number of classes in each of the two programs.
You enter a class at random, and observe that 55% of the students are boys. What is your best guess - does the class belong to program A or to program B?
[5]
children: Alan, Ben, Carl, Dan, and Ed. Consider the following distribu- | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
cc9ca999-03da-401e-a84b-8a16757f39fd | # (Ir)Rationality And Cognitive Biases In Large Language Models
## 2 Task Description
[5]
children: Alan, Ben, Carl, Dan, and Ed. Consider the following distributions:
Type I: Alan: 4; Ben: 4; Carl: 5; Dan: 4; Ed: 3.
Type II: Alan: 4; Ben: 4; Carl: 4; Dan: 4; Ed: 4.
In many rounds of the game, will there be more results of type I or of type II? | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09193v2.md",
"file_path": "paper_data/2402.09193v2.md",
"file_size": 69224,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
5f9b695b-c97b-4ade-aac2-3725c7ee9a28 | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
Negar Arabzadeh1∗
Julia Kiseleva2
Qingyun Wu3
Chi Wang2
Ahmed Awadallah2
Victor Dibia2
Adam Fourney2
Charles Clarke1
1University of Waterloo
2Microsoft Research
3Pennsylvania State University | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
3a437efa-2369-4e6b-9528-031e83403281 | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## Abstract
The rapid development in the field of Large Language Models (LLMs) has led to a surge in applications that facilitate collaboration among multiple agents to assist humans in their daily tasks. However, a significant gap remains in assessing whether LLM-powered applications genuinely enhance user experience and task execution efficiency. This highlights the pressing need for methods to verify the utility of LLM-powered applications, particularly by ensuring alignment between the application's functionality and end-user needs. We introduce AgentEval1, a novel framework designed to simplify the utility verification process by automatically proposing a set of criteria tailored to the unique purpose of any given application. This allows for a comprehensive assessment, quantifying the utility of an application against the suggested criteria. We present a comprehensive analysis of the robustness of AgentEval.
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
21b07bc1-8998-4a97-a878-1df2e903d39b | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 1 Introduction
The rapid development of open-source libraries (Wu et al., 2023; Li et al., 2023a) that aim to simplify the development of LLM-powered agentic solutions for various user-centric tasks has led to the rapid growth of such applications (Liang et al., 2023b; Hong et al., 2023; Talebirad and Nadiri, 2023). One of the long-lasting goals (Winograd, 1972) is the ability to seamlessly interact with humans in natural language to help end-users and to make their lives easier by assisting with their tasks, from math tutoring to completing household tasks and so on. End users have expectations and requirements for a developed application that need to be met. This understanding is essential to assess the *utility* it brings and, consequently, to further improve and align the application towards end-users'
goals.
Directly evaluating agentic systems poses challenges as current approaches predominantly rely on end-to-end success metrics - essentially, whether the agent accomplishes tasks (Shridhar et al., 2020b, 2019; Myers et al., 2023). However, comprehending user interaction with an application involves much more than success alone (Kiseleva et al., 2022a,b; Zhang et al., 2023). Take math problems, for instance; it is not merely about the agent solving the problem. Equally significant is its ability to present solutions based on various criteria, including completeness, conciseness, and clarity of the explanation provided. In other words, in a code completion scenario, even an incomplete code suggestion can be useful when it provides substantial boilerplate code or proposes a framework to solve a task (Dibia et al., 2023). Furthermore, success is not always clearly defined for every task. Knowing such criteria for an LLM-powered application and being able to quantify them is essential to verify whether user requirements are being satisfied, in other words, if the application brings utility to the end-users. Given the objective of verifying arbitrary applications, reliance on a benchmarking approach is untenable due to the expansive range of tasks requiring automation. A prerequisite is a scalable and flexible methodology capable of accommodating a diverse set of applications.
In this work, we aim to introduce the AgentEval framework, a tool crafted to swiftly gauge the utility of LLM-powered agentic applications designed to help end-users accomplish their desired tasks.
The goal of AgentEval | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
94ac21b5-f172-4bcf-b160-bc3a5c8f988d | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 1 Introduction
being able to quantify them is essential to verify whether user requirements are being satisfied, in other words, if the application brings utility to the end-users. Given the objective of verifying arbitrary applications, reliance on a benchmarking approach is untenable due to the expansive range of tasks requiring automation. A prerequisite is a scalable and flexible methodology capable of accommodating a diverse set of applications.
In this work, we aim to introduce the AgentEval framework, a tool crafted to swiftly gauge the utility of LLM-powered agentic applications designed to help end-users accomplish their desired tasks.
The goal of AgentEval is to assess the current alignment between application behavior and user goals, providing application developers with insights into how and what aspects of the current flow can be improved. AgentEval takes into account recent discoveries that have shown LLMs to emerge as a scalable and cost-effective alternative to human evaluations for open-ended tasks (Li et al., 2023b).
The *AgentEval* is illustrated in Fig. 1, which consists of two main agents executed consecutively.
These agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools (Wu et al.,
2023)2:
- *CriticAgent* suggests a list of criteria based on the task description and the suggested solutions; e.g., for math problems these can be the *Efficiency* of the proposed solution and the *Clarity* of the proposed solution;
- *QuantifierAgent* verifies how well the solutions s produced by an agentic system designed for task t perform on each criterion and returns the utility function; e.g., what is the Clarity level of the solution: not clear, moderately clear, or very clear. A minimal sketch of how these two roles can be wired together follows below.
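The sketch uses AutoGen's conversable agents and is our own illustration rather than the authors' implementation; the agent names, system messages, placeholder strings, and the `OAI_CONFIG_LIST` file are assumptions.

```python
# Minimal sketch of the two AgentEval roles on top of AutoGen (pyautogen).
# Illustrative only: names, prompts, and config handling are assumptions.
import autogen

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")  # assumed config file
llm_config = {"config_list": config_list}

critic = autogen.AssistantAgent(
    name="critic",
    system_message=(
        "You are given a task description plus a few successful and failed executions. "
        "Return a JSON list of criteria, each with a name, a description, and a small "
        "set of accepted values."
    ),
    llm_config=llm_config,
)

quantifier = autogen.AssistantAgent(
    name="quantifier",
    system_message=(
        "Given a task description, a list of criteria, and one execution log, rate the "
        "execution on every criterion using only that criterion's accepted values."
    ),
    llm_config=llm_config,
)

# A proxy that never asks a human and never auto-replies, so each chat is one round trip.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
    code_execution_config=False,
)

task_description = "Solve high-school competition math problems step by step."  # placeholder
sample_runs = "...successful and failed solutions go here..."                    # placeholder
one_solution = "...a single solution to be scored goes here..."                  # placeholder

# Step 1: CriticAgent proposes the criteria from the task description and sample runs.
user_proxy.initiate_chat(critic, message=f"{task_description}\n\n{sample_runs}")
criteria = user_proxy.last_message(critic)["content"]

# Step 2: QuantifierAgent scores one solution against every proposed criterion.
user_proxy.initiate_chat(quantifier, message=f"{task_description}\n\n{criteria}\n\n{one_solution}")
utility = user_proxy.last_message(quantifier)["content"]
print(utility)
```

In practice, the quantification step would be repeated for every task sample, and the per-criterion ratings aggregated into the utility function defined in Section 3.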
We believe that the usage of *AgentEval* can extend beyond the immediate verification of the current performance of LLM-powered applications.
The framework can be employed over time to uncover new capabilities of the system and potential changes in task utility for the end user. The discovered utility function can be utilized to optimize the system towards user needs or system developer requirements, and this optimization can occur over time.
In summary, our main contributions are:
C1 A definition of task utility that enables access to the possible requirements an end-user may have regarding an LLM-powered
2 https://github.com/microsoft/autogen | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
f21d5768-6439-4e81-90dc-3922a4b041da | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 1 Introduction
Eval* can extend beyond the immediate verification of the current performance of LLM-powered applications.
The framework can be employed over time to uncover new capabilities of the system and potential changes in task utility for the end user. The discovered utility function can be utilized to optimize the system towards user needs or system developer requirements, and this optimization can occur over time.
In summary, our main contributions are:
C1 A definition of task utility that enables access to the possible requirements an end-user may have regarding an LLM-powered application and how well the application satisfies this list of criteria;
2 https://github.com/microsoft/autogen
C2 An introduction of *AgentEval*, a novel framework that leverages LLM-powered agents as
a scalable and cost-effective alternative to human evaluations to produce task utility through the collaboration of two agents: CriticAgent proposes a list of criteria based on the task description and the successful and failed execution of the agent, and *QuantifierAgent* assesses how well the list of criteria is supported by the
current implementation of an application;
C3 An in-depth analysis of the *AgentEval* robustness on various tasks and datasets across different solutions that can be replicated for a newly unseen domain.
The remainder of this paper is organized as follows. Section 2 describes earlier work and background. We provide the motivation behind AgentEval and define the utility of the task in Section 3. Section 4 gives an overview of the datasets, namely MATH (Hendrycks et al., 2021b) and ALF-
World (Shridhar et al., 2020b), and the solutions for building applications powered by LLM utilized in our work.
Section 5 demonstrates our findings about applying AgentEval to assess the utility of the task for the selected datasets. Section 6 presents an in-depth analysis of the robustness of AgentEval, namely *CriticAgent* robustness (Section 6.1), *QuantifierAgent* robustness (Section 6.2)
and an automatic verification of QuantifierAgent
(Section 6.3). | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
4006132b-f17c-42f4-81a1-730393ff74e9 | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 2 Related Work
We build on two streams of prior work. First, we will discuss benchmarks and approaches to evaluate LLMs in general (Section 2.1). Second, we will present approaches to understand and predict user utility functions (Section 2.2).
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
df2102ce-2118-44ab-aae1-810bde0966f3 | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 2.1 Llm Evaluation
There exists a significant volume of literature dedicated to assessing Language Model Models
(LLMs), as evidenced by extensive research efforts (Guo et al., 2023; Ziyu et al., 2023; Chang et al., 2023; Liang et al., 2023a). LLMs have been evaluated from various aspects including, but not limited to, specialized LLMs (Jin et al., 2019), ethics and morality (Hendrycks et al., 2021a), safety and robustness (Wang et al., 2023), and knowledge and reasoning (Bian et al., 2023). Additionally, recent developments include the introduction of intricate multi-modal benchmark datasets (Mialon et al., 2023; Bang et al., 2023). Furthermore, there are attempts to evaluate LLMs as agents (Liu et al., 2023).
However, there is a lack of literature that focuses on the holistic verification of the utility of LLMs for end-users in solving their tasks, which we tackle in this work. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
eacdf972-e600-4db7-a138-27d1a32ea110 | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 2.2 User Satisfaction Prediction
Recent studies suggest that users interacting with various systems operate with specific utility functions in mind (Li et al., 2021, 2020; Azzopardi et al., 2018; Ahmadvand et al., 2022). Traditionally, metrics defining user satisfaction were tailored to specific applications, such as intelligent assistants (Kiseleva et al., 2016a,b), web search engines (Williams et al., 2016a,b; Williams and Zitouni, 2017), and dialogue systems (See et al., 2019). Moreover, there is a growing trend in utilizing LLMs for qualitative research (Bano et al., 2023) and adopting LLMs as proxies for human behavior (Tjuatja et al., 2023; Liu and Sun, 2023). Notably, CoEval (Li et al., 2023b) has recently demonstrated the synergy between human evaluation and LLMs in establishing evaluation criteria and conducting multi-dimensional evaluations for open-ended NLG tasks.
Building on top of these works, we propose a framework capable of assessing the utility of various LLM-powered applications at scale. This framework aims to align agentic systems with human preferences. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
aa4d60d2-cf3a-461f-bb28-ceae86038332 | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 3 Defining Task Utility
It is important to begin by considering the categories of tasks on which we focus LLM-powered applications. Fig. 1 outlines a taxonomy of target tasks for agentic systems in terms of success metrics. At the top level, tasks can be split into two main categories:
- *Success is not clearly defined* - For these tasks, users utilize a system in an assistive manner, seeking suggestions rather than expecting the system to solve the task end-to-end. For example, a user might request the system to generate an email based on some user input. In many cases, this generated content serves as a template that the user will later edit. However, success for such tasks is less well-defined. In the case of online evaluation, while expensive, we can ask users to what extent the assistance was helpful. Although quantifying to what extent the help was useful is still challenging on its own, the problem becomes increasingly challenging when it comes to offline evaluation, or evaluation in novel scenarios before we have users.
- *Success is clearly defined* - For these tasks, we can clearly determine whether a system solved the task or not. Consider agents that assist in accomplishing household tasks, where the definition of success is clear and measurable.
This second category can be further divided into two subcategories:
- Success is clearly defined and an optimal solution exists - For these tasks, only one solution is possible. For example, if you ask your assistant to turn on the light, the success of this task is clearly defined, and there is only one way to accomplish it.
- Success is clearly defined and multiple solutions exist - Increasingly, we observe situations where multiple trajectories of agent behavior can lead to either success or failure. In such cases, it is crucial to differentiate between the various successful and unsuccessful outcomes. For example, when you ask the agent to suggest a food recipe or tell you a joke, you may define success as the food tasting good or the joke being funny, but perhaps the recipe should not be too expensive to prepare, and the joke should not be offensive.
In our AgentEval framework, we are currently focused on tasks where success is clearly defined and multiple successful solutions may exist.
Our previous research on assistive agents suggested that the most optimal way to obtain human judgments is to present humans with two agents side by side and ask for preferences (Kiseleva et al., 2022b). In this setup of pairwise comparison | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d6ad2684-ad95-4c87-9cf0-12d960a73f6b | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 3 Defining Task Utility
when you ask the agent to suggest a food recipe
or tell you a joke, you may define success as the food tasting good or the joke being funny, but perhaps the recipe should not be too expensive to prepare, and the joke should not be offensive.
In our AgentEval framework, we are currently focused on tasks where success is clearly defined and multiple successful solutions may exist.
Our previous research on assistive agents suggested that the optimal way to obtain human judgments is to present humans with two agents side by side and ask for preferences (Kiseleva et al., 2022b). In this setup of pairwise comparison, humans can develop a list of criteria to explain why they prefer the behavior of one agent over another. For instance, 'the first agent was faster in execution' or 'the second agent moves more naturally'.
Therefore, the comparative nature guided humans to come up with a list of criteria that helps to infer the utility of the task. With this idea in mind, we designed AgentEval (shown in Fig. 1), where we employ LLMs to help us understand, verify, and assess task utility for the multi-agent system.
The AgentEval framework employs two types of agents, namely:
- The goal of a *CriticAgent* is to suggest a list of criteria that can be used to assess task utility for end users. The critic is given a task description as well as a few successful and failed examples of the task execution; it is then able to return a list of criteria $C = \{c_1, \ldots, c_n\}$, where each criterion $c_i$ is accompanied by a set of accepted values $\{\omega_j\}_{j=1}^{m}$. For example, the *CriticAgent* generated criteria such as Clarity and Efficiency, among others, as described in Tab. 1 for solving math problems. Each of the criteria would be accompanied with a set
| Criteria | Description | Accepted Values |
|----------|-------------|-----------------|
| {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
caa6f10a-1448-419e-8624-f74ba75a1526 | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 3 Defining Task Utility
, and more as described in Tab. 1 for solving math problems. Each of the criteria would be accompanied with a set

| Criteria | Description | Accepted Values |
|----------|-------------|-----------------|
| Clarity | The ease of understanding the steps, explanations, and language used in the | Not Clear (0); Moderately Clear (1); Very Clear (2) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
fa71f921-64a0-4a74-907c-86faf3d74b3c | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 3 Defining Task Utility
| Criteria | Description | Accepted Values |
|----------|-------------|-----------------|
| Clarity | The ease of understanding the steps, explanations, and language used in the solution. | |
| Efficiency | The use of optimal methods or approaches to solve the math problem. | Inefficient (0); Moderately Efficient (1); Efficient (2) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b2ef0b46-d204-4b0a-91d3-1d68b1526c2d | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 3 Defining Task Utility
Moderately Efficient (1); Efficient (2)
Not Addressed (0); Partially Addressed (1); Well Addressed (2)
Error Analysis | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
ad88d6af-f603-4393-89a3-99b9b2398340 | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 3 Defining Task Utility
| Criteria | Description | Accepted Values |
|----------|-------------|-----------------|
| Error Analysis | The identification and description of possible errors or misconceptions in the math problem-solving process. | |
| Completeness | Quality of code in terms of efficiency and elegance | Incomplete (0) | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
ba48d1dd-b8f8-4722-8635-172481b90e7d | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 3 Defining Task Utility
Incomplete (0); Mostly Complete (1); Complete (2)
of accepted values as shown in this Table as an example.
- The goal of *QuantifierAgent* is to quantify each of the suggested criteria to assess the task utility $U_t$ for the end user in the following form: $U_t(s) = \{Q_i(s \mid c_i)\}_{i=1}^{n}$, providing us with an idea of the utility of this system for the given task for the end user, where $s$ represents the task sample and $Q_i(s \mid c_i)$ is the quantifier output for sample $s$ based on the criterion $c_i$. For example, for one sample of math problem solving, and given the generated criteria shown in Tab. 1, the solution's Accuracy could be quantified as "Incorrect", "Partially Correct" or "Correct". Eligible values for the quantification process are shown in the "Accepted Values" column of Tab. 1.
Next we will discuss the datasets and baselines we use to test the work of AgentEval. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
0c8b6c03-9a35-465e-a9a5-1815ca6c65ee | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 4 Datasets And Solutions
This section provides an overview of the datasets utilized in our study. Our selection encompasses a variety of datasets, from those based on real-world problems to their simulations and beyond.
The Mathematics Dataset (Section 4.1) is chosen for its widespread usage and comprehensive understanding in the field. It represents complex problem-solving scenarios that are fundamental in evaluating the effectiveness of multi-agent systems.
AlfWorld (Section 4.2) offers a scenario involving multi-turn interactions within a moderately approximated multi-modal environment. This dataset is instrumental in assessing agents' performance in interactive and dynamic settings.
Each dataset plays a critical role in evaluating different aspects of AgentEval's capabilities, from handling complex theoretical problems to navigating real-world scenarios. In both tasks, although success is clearly defined, multiple solutions exist for accomplishing the objectives. For example, when solving a math problem, there are various approaches one can take. Similarly, in the Alfworld dataset, which involves household tasks, there are multiple ways to complete them based on how you search for objects and the thinking strategies you employ, among other factors. An example of Math problem solving and AlfWorld task are shown in Appendix A.1. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b2d0a54b-7a5a-4b04-b199-8509df3db22b | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 4.1 Math Problem Solving
The MATH dataset is a substantial collection of 12,500 challenging mathematics problems from high school competitions (Hendrycks et al., 2021b). Each problem comes with a step-by-step solution, enabling models to learn how to generate both derivations and explanations. The dataset covers a wide range of mathematical subjects and is tagged by difficulty level, offering a nuanced measure of model performance across various aspects of mathematical problem-solving.
This dataset is particularly suitable for testing multi-agent systems for several reasons, including:
(i) The problems in the MATH dataset are not simple computations but require a deep understanding of mathematical concepts, heuristics, and problem-solving strategies. (ii) Since the dataset includes step-by-step solutions, it allows for the assessment of an agent's ability to learn and reason through a problem, not just its ability to arrive at the correct answer. (iii) The variety of subjects and difficulty levels in the MATH dataset enables a comprehensive evaluation of a system's versatility and adaptability in different mathematical domains, which is crucial for multi-agent systems that are expected to operate across a range of scenarios.
Similar to the math problem experimental setup in Wu et al. (2023), we carry out two experimental evaluations involving 120 problems from level-5, the most challenging category, with 20 problems from each of six categories: number theory, counting and probability, prealgebra, algebra, intermediate algebra, and precalculus.
Solutions: In establishing a solution for this task, we draw inspiration from the experiments showcased in (Wu et al., 2023). We evaluate the methodology proposed by AutoGen (Wu et al., 2023), as well as Langchain ReAct 3 and a vanilla solver that employs GPT-4 to tackle the math task. These solution approaches have previously demonstrated promising performance in solving mathematical problems, particularly on the dataset at hand (Wu et al., 2023). We assess and compare the performance of these three solutions using AgentEval. Fig. 9 displays an example of one math problem from the prealgebra category as well as the solution created by AutoGen. In Section 5.1, we delve into how AgentEval performs on math problem solving tasks and how the performance measured with AgentEval correlates with the ground truths. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
7fe001b2-e771-4fbf-b665-c47ea347c299 | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 4.2 ALFWorld Household Task
ALFWorld presents a set of language-based interactive decision-making tasks within simulated household environments (Shridhar et al., 2020b).
This benchmark is distinguished by its diversity of tasks, offering a comprehensive platform for testing AI and multi-agent systems.
This benchmark is particularly suited for such evaluations for several reasons. First, ALFWorld is the first interactive, parallel environment that aligns text descriptions and commands with a physically embodied robotic simulation. It extends two prior works: TextWorld, an engine for interactive text-based games, and ALFRED, a large-scale dataset for vision-language instruction following in embodied environments (Shridhar et al., 2020a; Côté et al., 2019). The cross-modality framework of this benchmark gives a variety of embodied tasks corresponding text-based counterparts, allowing agents to be trained and evaluated in both the language and the embodied world. In addition, ALFWorld supports the development of agents that can reason abstractly and execute actions concretely, mimicking human-like decision making in varying contexts. Finally, the dataset's inclusion of a wide range of tasks, from household chores to more intricate problem-solving scenarios, provides a comprehensive testbed for evaluating the adaptability and problem-solving capabilities of AI and multi-agent systems. In general, the dataset allows agents to explore, interact, and learn in an abstract language environment before dealing with the complexities of an embodied environment.
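To make the interaction pattern concrete, an ALFWorld-style episode reduces to a text loop in which the agent reads an observation and emits a command. The sketch below illustrates that loop with a hypothetical `TextHouseholdEnv` and agent callable; it is not the actual ALFWorld API.

```python
from typing import Callable

def run_episode(env: "TextHouseholdEnv",            # hypothetical text environment
                agent: Callable[[str, str], str],   # maps (goal, observation) -> action
                max_steps: int = 50) -> bool:
    """Roll out one language-based episode, e.g. 'put a hot apple in the fridge'."""
    observation, goal = env.reset()
    for _ in range(max_steps):
        action = agent(goal, observation)           # e.g. "open fridge 1"
        observation, done, success = env.step(action)
        if done:
            return success
    return False
```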
Solutions: To solve the ALFWorld household tasks, similar to Wu et al. (2023), we consider ReAct (Yao et al., 2022) as well as AutoGen with two agents and AutoGen with three agents (Wu et al., 2023). ReAct is an agent that operates within the ALFWorld environments and is responsible for both suggesting plans and executing actions. The AutoGen two-agent system consists of an LLM-backed assistant agent responsible for suggesting plans and an executor agent responsible for executing actions in the ALFWorld environments. Both ReAct and this solution occasionally struggle to leverage basic commonsense knowledge about the physical world, which can lead to repetitive errors and getting stuck in loops. In AutoGen with three agents, a grounding agent is added to supply critical commonsense knowledge whenever the system exhibits early signs of recurring errors. We assess and compare the performance of these three solutions using AgentEval. Fig. 10 displays a portion of an example ALFWorld household task solved by AutoGen.
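For illustration, the three-agent variant could be wired up with AutoGen roughly as follows. The system messages and LLM configuration are placeholders rather than the prompts used in Wu et al. (2023), and in the actual setup the executor forwards actions to the ALFWorld environment rather than executing code.

```python
import autogen

llm_config = {"config_list": [{"model": "gpt-4"}]}  # placeholder configuration

assistant = autogen.AssistantAgent(
    name="assistant",
    system_message="Suggest a step-by-step plan for the household task.",
    llm_config=llm_config,
)
grounding = autogen.AssistantAgent(
    name="grounding_agent",
    system_message="Provide commonsense facts about the physical world "
                   "when the conversation shows signs of repeated errors.",
    llm_config=llm_config,
)
executor = autogen.UserProxyAgent(
    name="executor",
    human_input_mode="NEVER",
    code_execution_config=False,  # actions are relayed to the ALFWorld env elsewhere
)

groupchat = autogen.GroupChat(agents=[assistant, grounding, executor],
                              messages=[], max_round=30)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
executor.initiate_chat(manager, message="Put a hot apple in the fridge.")
```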
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
5a3e2383-f6c2-444c-a5d8-81403e80e69e | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 4.2 Alfworld Household Task
suggesting plans and executing actions. On the other hand, AutoGen Two-Agent System consists of an LLM-backed assistant agent responsible for suggesting plans, and an executor agent responsible for executing actions in the ALFWorld environments. Both ReAct and this solution occasionally struggles with leveraging basic commonsense knowledge about the physical world, which can lead to repetitive errors and getting stuck in loops. In AutoGen with three agents, a grounding agent is provided just for the sake of critical common sense knowledge whenever the system exhibits early signs of recurring errors. We assess and compare the performance of these three solutions using AgentEval. Fig. 10 displays a portion of an example of AlfWorld Householding task solved by AutoGen. | {
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
e65edc31-9c13-4990-8aee-8af3df71a760 | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 5 AgentEval Workflow
This section outlines the workflow of AgentEval, illustrated in Fig. 1. Next, we demonstrate how AgentEval works on two datasets: math problems (Sec. 4.1) and ALFWorld (Sec. 4.2).
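Conceptually, the workflow chains two LLM roles: the CriticAgent proposes criteria with accepted values, and the QuantifierAgent scores each sample against those criteria. The sketch below captures this control flow only; `llm` stands in for any chat-completion call, and the prompts are paraphrases rather than AgentEval's actual prompts.

```python
import json
from typing import Callable, Dict, List

def run_agenteval(task_description: str,
                  sample_trajectories: List[str],
                  llm: Callable[[str], str]) -> List[Dict]:
    """Two-stage AgentEval sketch: criteria extraction, then per-sample scoring."""
    # Stage 1 - CriticAgent: propose criteria with accepted values.
    critic_prompt = (
        f"Task: {task_description}\n"
        "List assessment criteria as JSON: "
        '[{"name": ..., "description": ..., "accepted_values": [...]}]'
    )
    criteria = json.loads(llm(critic_prompt))

    # Stage 2 - QuantifierAgent: score every trajectory on every criterion.
    results = []
    for trajectory in sample_trajectories:
        quantifier_prompt = (
            f"Task: {task_description}\n"
            f"Criteria: {json.dumps(criteria)}\n"
            f"Trajectory: {trajectory}\n"
            "Return a JSON object mapping each criterion name to one accepted value."
        )
        results.append(json.loads(llm(quantifier_prompt)))
    return results
```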
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
b181a2b3-e5df-4267-b7e4-ef6c575adef6 | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 5.1 AgentEval for Math Problems
Critic and Quantifier Findings: After executing the CriticAgent, we obtain the set of criteria for validating the results of the mathematical problems, presented in Table 1. Subsequently, the QuantifierAgent is tasked with quantifying each criterion over its accepted values. In Figure 3 (a), we present the output of the *QuantifierAgent*, i.e., the measured performance of the three solutions on this task. This visual representation of the AgentEval output reveals some intriguing insights. Notably, AgentEval does not rate the three solutions as performing equally well across the different criteria. For instance, while all three solutions leverage GPT-4 as the underlying language model, AutoGen outperforms ReAct and the Vanilla GPT-4 solver in terms of accuracy. This observation extends to solution completeness and efficiency as well. Conversely, on the clarity criterion, the three approaches are more competitive.
As depicted in this figure, the range of quantified values for the error-analysis criterion differs from that of the other metrics. To gain a better understanding of this criterion, we further scrutinize the results by categorizing them into successful and failed cases, as illustrated in Fig. 3 (b). AutoGen, the Vanilla solver, and ReAct are shown in orange, blue, and green, respectively; darker bars represent performance on successful cases and lighter bars represent performance on failed cases. The gap between the dark and light bar of each color supports AgentEval's validity, as we expect each positively framed criterion to be quantified higher for successful cases than for failed ones. We observe that in most cases the successful and failed cases remain distinguishable even with 95% confidence intervals over all successful and failed cases.
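The success-versus-failure comparison in Fig. 3 (b) can be reproduced from per-sample quantifier outputs with a computation along the following lines; the record format is hypothetical, and the normal-approximation interval is one reasonable choice rather than necessarily the one used here.

```python
import numpy as np

def summarize(scores: np.ndarray):
    """Mean and normal-approximation 95% CI for one criterion's quantified scores."""
    mean = scores.mean()
    half_width = 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))
    return mean, (mean - half_width, mean + half_width)

# `records` is a hypothetical list of per-sample dicts such as
# {"solution": "AutoGen", "success": True, "efficiency": 3, "clarity": 4, ...}
def compare_success_vs_failure(records, solution: str, criterion: str):
    by_outcome = {}
    for outcome in (True, False):
        vals = np.array([r[criterion] for r in records
                         if r["solution"] == solution and r["success"] == outcome],
                        dtype=float)
        if len(vals) > 1:
            by_outcome["success" if outcome else "failure"] = summarize(vals)
    return by_outcome
```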
We delve further into the differences among the three solutions' successful cases and among their failed cases. One interesting observation from this figure is that not all successful cases are identical and, similarly, not all failed cases are the same. The differences between successful cases across the three solutions are smaller than the differences between their failed cases. For instance, AutoGen's failed cases exhibit higher efficiency and completeness than those of the Vanilla GPT-4 solver, which provides valuable additional insight.

Table 2: Criteria extracted by the CriticAgent for the ALFWorld household task, with their descriptions and accepted values.

| Criteria | Description | Accepted Values |
|---|---|---|
| Task Understanding | How well the participant was able to comprehend the problem set and follow the task instructions | Excellent (4), Good (3), Average (2), Poor (1), Terrible (0) |
| Plan Making | The ability of the participant to strategize and make a plan for tackling the task | Excellent (4), Good (3), Average (2), Poor (1), Terrible (0) |
| Action Decision | The participant's decision-making skills in choosing the right action to perform | Excellent (4), Good (3), Average (2), Poor (1), Terrible (0) |
| Action Execution | How effectively the participant is able to execute the chosen action | Excellent (4), Good (3), Average (2), Poor (1), Terrible (0) |
| Response to Feedback | How well the participant adapts his/her next steps based on the feedback from the environment | Excellent (4), Good (3), Average (2), Poor (1), Terrible (0) |
| Correctness of Action | The correctness of the action performed by the participant with respect to the available actions and the current context | Correct (1), Incorrect (0) |
| Use of Terminate | Whether the participant uses the 'TERMINATE' command appropriately | Appropriate (1), Inappropriate (0) |
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |
d9420e5e-810c-466e-92c0-11f04c9eb565 | # Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications
## 5.2 AgentEval for ALFWorld
Critic and Quantifier Findings: In this section, we provide an example of AgentEval applied to the ALFWorld household task introduced in Sec. 4.2, in which real-world household environments are emulated through textual interfaces (Shridhar et al., 2020b). When running the CriticAgent on this task, it identified criteria such as "Task Understanding", "Plan Making", and "Response to Feedback", as outlined in Tab. 2. We consulted researchers deeply involved with these tasks, and their expertise confirmed that these criteria are relevant and significant, in line with (Li et al., 2023b). For example, given that these tasks are language-based and require interactive decision-making, an agent in ALFWorld is tasked with high-level objectives, such as placing a hot apple in the fridge, and must navigate and interact with a simulated household environment to achieve them. The criteria displayed in Tab. 2 therefore cover the assessment of this task well. While most criteria are self-descriptive, we note for the criterion "Use of TERMINATE" that the agent is prompted to output the term "TERMINATE" upon task completion, which is closely correlated with task success.
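Concretely, the extracted criteria in Tab. 2 can be handed to the QuantifierAgent as a small machine-readable schema. The serialization below is illustrative only; the field names are not AgentEval's exact format.

```python
# Illustrative serialization of a few criteria from Tab. 2.
alfworld_criteria = [
    {
        "name": "Task Understanding",
        "description": "How well the agent comprehends the goal and follows the task instructions.",
        "accepted_values": {"Excellent": 4, "Good": 3, "Average": 2, "Poor": 1, "Terrible": 0},
    },
    {
        "name": "Response to Feedback",
        "description": "How well the agent adapts its next steps based on feedback from the environment.",
        "accepted_values": {"Excellent": 4, "Good": 3, "Average": 2, "Poor": 1, "Terrible": 0},
    },
    {
        "name": "Use of TERMINATE",
        "description": "Whether the agent issues the 'TERMINATE' command appropriately upon completion.",
        "accepted_values": {"Appropriate": 1, "Inappropriate": 0},
    },
]
```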
Following the extraction of the criteria detailed in Tab. 2, they are passed to the QuantifierAgent for quantification on each sample. Figure 4 presents the results for the three solutions introduced above, AutoGen with two agents, AutoGen with three agents, and ReAct, on the 134-game test set from Wu et al. (2023). On the left side of Fig. 4, a spider plot illustrates the performance of the three solutions across all criteria. Note that all criteria except "Use of TERMINATE" and "Correctness of Action" employ a five-level grading scale, while these two criteria are binary. From this figure, it is evident that ReAct performs notably worse across all criteria, while AutoGen with two agents and with three agents demonstrate competitive performance. Notably, AutoGen with the additional commonsense grounding agent slightly outperforms the others, particularly on Response to Feedback and Action Execution. Additionally, the bar plot on the right side of Fig. 4 categorizes the 134 games into two groups, failed and successful, displaying the quantifier performance for each subgroup. As in Fig. 3, darker colors represent performance on successful cases for each solution, while lighter colors represent performance on failed cases. AutoGen
"creation_datetime": "2024-03-04",
"file_name": "2402.09015v2.md",
"file_path": "paper_data/2402.09015v2.md",
"file_size": 63681,
"file_type": null,
"last_accessed_datetime": "2024-03-04",
"last_modified_datetime": "2024-02-22"
} |