doc_id (string, length 36) · contents (string, 22–3.25k chars) · metadata (dict)
fcae47c6-5f5d-4e1d-80e6-22f849da3675
# Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications ## 5.2 Agenteval For Alfworld two criteria are binary. From this figure, it is evident that ReAct performs notably worse across all criteria, while the AutoGen 2-agent and 3-agent setups demonstrate competitive performance. Notably, AutoGen with an additional common-sense grounding agent slightly outperforms the others, particularly for Response to Feedback and Action Execution. Additionally, the barplot on the right side of Fig. 4 categorizes the 134 games into two groups, failed and successful, and displays the quantifier performance for each subgroup. As in Fig. 3, darker colors represent performance in successful cases for each solution, while lighter colors represent performance in failed cases. AutoGen 3-agent, AutoGen 2-agent, and ReAct are shown in blue, green, and orange, respectively. For most criteria, the distinction between failed and successful cases is clear, even within a 95% confidence interval. However, for certain criteria, such as "Task understanding," all solutions, whether they failed or succeeded, exhibit very similar performance. This could be interpreted in three ways: (1) all solutions have a good understanding of the task even if they fail to complete it, (2) this criterion may be redundant, as it does not discriminate among these three solutions, or (3) the *QuantifierAgent* is unable to score the criterion in a meaningful way. We refrain from concluding which criteria are most suitable for this specific task. Instead, we emphasize the importance of conducting a more in-depth analysis of performance beyond success rates, tailored to one's goals and application requirements.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09015v2.md", "file_path": "paper_data/2402.09015v2.md", "file_size": 63681, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3acf8e3f-a4f3-454e-b1c4-ba4c86c182ba
# Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications ## 6 Agenteval Robustness Analysis And In-Depth Discussion This section presents the results of our analysis of how robust AgentEval is. First, we inspect whether the list of criteria can be extracted solely from the task description (task-based criteria) and how the list changes when failed and successful samples from the data are added; here we vary the sample size to check its effect on the final list of criteria (Section 6.1). Second, we focus on how we can estimate the robustness of the QuantifierAgent (Section 6.2). We note that all the experiments reported in the paper are conducted with the temperature set to 0. Next, we present our analysis using the MATH problems dataset.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09015v2.md", "file_path": "paper_data/2402.09015v2.md", "file_size": 63681, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
88462027-2962-4ba6-83de-da345b6bd097
# Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications ## 6.1 Task-Based Vs Solution-Based Criteria General Hypothesis We execute the CriticAgent using two distinct methods. The first method involves the agent generating criteria solely from the provided task description; we refer to these as "task-based" criteria. On the other hand, the CriticAgent can also derive criteria not only from a task description but also from examples of task solutions; we call these "solution-based" criteria. In this context, our objective is to examine whether this approach leads to variations in the criteria formulated by the agents. We believe this investigation is important for gaining a clearer picture of which criteria are necessary for a reliable assessment. A solution to a mathematical problem will likely need to satisfy criteria such as accuracy and clarity in any case, independent of what the solution is. However, when additional tools are used to solve the problems, such as writing code to solve math problems, additional criteria like 'Code Efficiency' may be introduced to the set of criteria. If one never considered solving the problem with a specific method such as coding, they might not initially include such a criterion. In summary, depending on whether the *CriticAgent* receives only a task description or both a task description and examples of solutions, we classify the criteria as either "task-based" or "solution-based". Additionally, it is important to analyze whether the solution-based criteria overlap across different solutions and to what extent different solutions share these criteria. To compare the differences between task-based and solution-based criteria, Fig. 5 displays the number of unique criteria extracted for mathematical problem solving in task-based mode and in three different solution-based settings, i.e., when the solutions come from AutoGen, ReAct, and the Vanilla Solver. To balance computational cost against the depth of the robustness analysis, we conducted 50 runs of the CriticAgent with different seeds. Subsequently, for N = 50 iterations, we randomly selected M ∈ [1, 50] samples (M is shown on the x-axis of Fig. 5) and present the average number of unique extracted criteria along with its 95% confidence interval after repeating this process 50 times. We note that because we obtained results from the CriticAgent in 50 iterations in total, the confidence intervals become smaller as M gets closer to the maximum number of samples, i.e., 50.
- Problem Difficulty: The complexity of the math problem that has been solved.
- Problem Complexity: The level of difficulty of the problem.
- Innovativeness:
{ "creation_datetime": "2024-03-04", "file_name": "2402.09015v2.md", "file_path": "paper_data/2402.09015v2.md", "file_size": 63681, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e983755f-ec59-4674-89af-998bb4005490
# Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications ## 6.1 Task-Based Vs Solution-Based Criteria , we randomly selected M ∈ [1, 50] samples (M is shown on the x-axis of Fig. 5) and present the average number of unique extracted criteria along with its 95% confidence interval after repeating this process 50 times. We note that because we obtained results from the CriticAgent in 50 iterations in total, the confidence intervals become smaller as M gets closer to the maximum number of samples, i.e., 50.
- Problem Difficulty: The complexity of the math problem that has been solved.
- Problem Complexity: The level of difficulty of the problem.
- Innovativeness: The novelty and creativity in the approach to solve the problem.
- Innovation: The ability to solve a problem using a unique or creative method not commonly known.
- Time Taken: The time taken to solve the problem.
- Time to Completion: The amount of time taken to solve the problem completely.
- Understandability: The clarity and ease of comprehension of the solution provided.
- Readability: How easy it is to comprehend the provided solution.
When examining the criteria, we identified instances where certain criteria are quite similar but are expressed differently. These are essentially metrics that convey the same concept but are phrased with slight variations. In Table 3, we provide examples of such similarities along with their descriptions. To gain deeper insight into the results presented in Figure 5, we suggest consolidating these closely related criteria and determining the total number of unique criteria once again. This approach serves two purposes: 1. It enhances our understanding of the actual number of unique criteria that have been extracted. 2. It allows us to assess whether the repetitiveness and redundancy of criteria differ between solution-based and task-based criteria. By doing so, we can gain a better grasp of the data and draw more meaningful conclusions from our analysis. To consolidate similar criteria, we draw inspiration from previous work (Liu et al., 2022; Vahtola et al., 2022; Reimers and Gurevych, 2019) which demonstrated that pre-trained language models fine-tuned for paraphrasing and semantic similarity can yield high performance in numerous downstream NLP tasks. Specifically, we employ a fine-tuned pre-trained language model designed for paraphrasing, the Hugging Face Paraphrase MiniLM. Our approach begins by encoding each criterion's title and its description, followed by measuring pairwise similarity between all
{ "creation_datetime": "2024-03-04", "file_name": "2402.09015v2.md", "file_path": "paper_data/2402.09015v2.md", "file_size": 63681, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b0b9763f-b9a9-45fe-95bb-87fd25057169
# Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications ## 6.1 Task-Based Vs Solution-Based Criteria consolidate similar criteria, we draw inspiration from previous work (Liu et al., 2022; Vahtola et al., 2022; Reimers and Gurevych, 2019) which demonstrated that pre-trained language models fine-tuned for paraphrasing and semantic similarity can yield high performance in numerous downstream NLP tasks. Specifically, we employ a fine-tuned pre-trained language model designed for paraphrasing, the Hugging Face Paraphrase MiniLM. Our approach begins by encoding each criterion's title and its description, followed by measuring pairwise similarity between all available criteria within our experiments. Subsequently, using a specified threshold value denoted as τ, we treat pairs whose embedded representations have a cosine similarity above τ as duplicates and select one member of each such pair as its representative. This strategy is commonly employed in various downstream NLP tasks. In Fig. 5, we illustrate the number of unique extracted criteria under different threshold values, namely 0.7, 0.85, and 1. A threshold of 1 implies that no criteria are filtered out. Summary In this section, we delved into various inputs and methods for extracting criteria. Our exploration compared the outcomes of task-based criteria, derived solely from task descriptions, with those of solution-based criteria, where the CriticAgent is exposed to both examples of solutions and the task description. We observed that solution-based methods produce a greater diversity of criteria compared to task-based methods. Furthermore, the diversity in the number of unique criteria varied even within solution-based methods, influenced by the model's level of creativity. Additionally, we noticed a tendency for certain criteria to recur when running the *CriticAgent* multiple times. To address this, we suggest implementing consolidation techniques, such as merging synonymous terms, to eliminate redundant criteria.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09015v2.md", "file_path": "paper_data/2402.09015v2.md", "file_size": 63681, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a2aeb479-6eda-4d79-827e-8dc2158f6d0d
# Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications ## 6.2 Quantifier Agent Robustness General Hypothesis Here, we aim to investigate the robustness of the *QuantifierAgent* when applied repeatedly to the same set of criteria. Our goal is to assess the consistency of the results when quantifying the same set of criteria multiple times. This is of utmost importance, as we expect the behavior of the quantifier to be stable and relatively free from noise when provided with a single sample and a fixed set of criteria. This stability is crucial for us to have confidence in the results. Additionally, this analysis can help us identify and filter out criteria that may not be sufficiently stable for reliable use. To achieve this, we selected a specific subset of criteria related to mathematical problems, as detailed in Table 1, and conducted 50 runs of the quantifier agent on the 120 problems described in Section 4.1. Our expectation is to observe consistent quantified performance for each of the criteria. In Fig. 6, we present the distribution of quantified performance across 50 runs for both successful and failed cases, focusing on the five selected criteria. A consistently horizontal performance trend indicates greater robustness of the quantifier, whereas larger fluctuations suggest less robustness and noisier performance of the agent. As shown in the results, for four out of the five generated criteria, we consistently observe steady performance. Not only do the success cases consistently outperform the failed cases, but their performance also falls within a similar range across runs. However, when it comes to the "error analysis" criterion, we observe more variable performance of the quantifier. It does not consistently predict one group (success or failed) to perform better than the other, and the quantifier's performance varies across different runs. This suggests that the AgentEval tool may not exhibit promising robustness for this particular criterion. The underlying issue could be either that the criterion itself lacks clarity and appropriateness for the task, or that the QuantifierAgent struggles to quantify this criterion effectively. In either case, it is advisable to either modify or eliminate this criterion to enhance trustworthiness and reliability. Furthermore, we present the distribution of quantified values in Fig. 7 using box plots, illustrating the distribution of quantifier values for both failed (dark blue) and successful cases (light blue) across all criteria. The box plots display the first and third quartiles of the distribution as well as the median. In this figure, robust criteria should exhibit a narrower range of quantifier performance (narrower box plots
{ "creation_datetime": "2024-03-04", "file_name": "2402.09015v2.md", "file_path": "paper_data/2402.09015v2.md", "file_size": 63681, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
bf6d6301-105f-4a37-933c-7b2927174922
# Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications ## 6.2 Quantifier Agent Robustness appropriateness for the task, or the QuantifierAgent struggles to quantify this criterion effectively. In either case, it is advisable to either modify or eliminate this criterion to enhance trustworthiness and reliability. Furthermore, we present the distribution of quantified values in Fig. 7 using box plots, illustrating the distribution of quantifier values for both failed (dark blue) and successful cases (light blue) across all criteria. The box plots display the first and third quartiles of the distribution as well as the median. In this figure, robust criteria should exhibit a narrower range of quantifier performance (narrower box plots), and it should be easy to distinguish between the dark and light box plots for each criterion. Consistent with our previous observations, all criteria except "error analysis" allow for easy differentiation between successful and failed cases. Additionally, some criteria prove to be more robust than others. For example, accuracy displays a narrower distribution, while clarity in failed cases covers a wider range. We believe that such an analysis of the quantifier agent's performance will yield valuable insights for enhancing reliability, trustworthiness, and explainability in performance evaluation. Summary We recognize the importance of thoroughly investigating the robustness of each criterion in quantification studies. This analysis is crucial as it sheds light on the stability of each criterion. Moreover, when ground truths are available, such as in cases of success versus failure, they provide a benchmark to validate our assessments. Additionally, it is important to acknowledge that not all criteria exhibit the same level of robustness. This variability demands careful consideration during evaluations, especially given the non-deterministic nature of LLMs. Such awareness is essential to ensure the reliability and accuracy of our assessments in the dynamic field of LLMs.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09015v2.md", "file_path": "paper_data/2402.09015v2.md", "file_size": 63681, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
30d6b192-1e75-49ab-bb79-de0b572d5b4b
# Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications ## 6.3 Quantifieragent Verification To assess the accuracy of quantifying each criterion, it is essential to verify the quantification process. Ideally, we would like to validate this process by comparing it with known pairwise samples, where we have definitive knowledge that for a given criterion C, sample A is superior to sample B. The correct quantification should align with this knowledge. However, as the use of LLM-powered applications continues to expand daily, obtaining annotated data for many tasks is often impractical, if not impossible. Therefore, we propose employing synthetically altered versions of the samples to obtain the knowledge required for this verification. Let us assume that we have an alternative, disturbed version of sample A, which we call A′. Assuming that the original sample A outperforms the noise-injected A′, we anticipate that the criteria assessing sample quality will assign higher values to the original sample than to the noisier variant of the same case. To carry out this validation, we conducted experiments involving mathematical problems. We introduce random noise into the solutions by removing a certain percentage of the solution sentences from AutoGen's results on the math problem solving dataset. For criteria such as "completeness" or "clarity", we expect to observe greater completeness or clarity in the original solution as opposed to the one missing a portion of the solution. In our study, our goal is to assess the QuantifierAgent's ability to capture these distinctions between a known better solution and a worse one. We generated disturbed versions of solutions by randomly removing 25% of the sentences and running the quantifier over the noisy solutions. The results of these experiments are presented in Fig. 8. As depicted in this figure, the criteria that capture the quality of the solutions, such as "clarity" and "completeness", are scored lower for the disturbed solutions than for the original ones. This observation helps establish confidence in the performance of the QuantifierAgent.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09015v2.md", "file_path": "paper_data/2402.09015v2.md", "file_size": 63681, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
62daa596-e75b-4d5b-9442-08f2bdfda72f
# Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications ## 7 Conclusions And Future Work The rapid development of open-source libraries aiming to simplify the creation of Large Language Model (LLM)-powered agentic solutions for various user-centric tasks has facilitated the rapid growth of such applications. However, meeting end-users' expectations and requirements for these applications is paramount, underscoring the importance of assessing the utility they provide. Directly evaluating agentic systems presents challenges, as current approaches often rely on end-to-end success metrics alone. However, understanding user interaction with an application entails more than just task success. Given the diverse range of tasks requiring automation, a scalable and flexible methodology is essential for evaluating these applications effectively. In this work, we introduce the AgentEval framework, designed to swiftly gauge the utility of LLM-powered agentic applications for end-users. AgentEval aims to assess the alignment between application behavior and user goals, providing developers with insights into areas for improvement. The framework leverages recent findings suggesting LLMs as a scalable and cost-effective alternative to human evaluations for open-ended tasks. AgentEval consists of two agents: the *CriticAgent* suggests criteria based on task descriptions and suggested solutions, while the *QuantifierAgent* verifies how well the solutions align with these criteria. This framework is customizable, adaptable, and can operate in various modes, employing combinations of LLMs, human inputs, and tools. We believe that AgentEval's utility extends beyond immediate performance verification. It can uncover new system capabilities over time and adapt to changes in user needs or developer requirements. In summary, our contributions include defining task utility, introducing the AgentEval framework, and conducting a robust analysis of its performance across various datasets and solutions. AgentEval represents a significant step towards evaluating and optimizing LLM-powered applications to better serve end-users.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09015v2.md", "file_path": "paper_data/2402.09015v2.md", "file_size": 63681, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d8c31b18-b36b-4079-bb82-1b19bc5fe212
# Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications ## A Appendix A.1 Task Examples In Figures 9 and 10, we display an example of a Math problem and an AlfWorld Householding task with AutoGen's solution for these problems. "problem": "How many positive and negative integers is 12 a multiple of?" "level": "Level 5" "type": "Prealgebra" "response_with_ans": "messages": [ { "content": "How many positive and negative integers is 12 a multiple of?", "role": "user" }, { "content": "To determine how many positive and negative integers 12 is a multiple of, we need to find the divisors of 12. Then we will double the number of divisors, as each positive divisor has a corresponding negative divisor. Let's use a Python code snippet to find all the divisors of 12 and count them.

```python
num = 12
divisors = [i for i in range(1, num + 1) if num % i == 0]
positive_divisors_count = len(divisors)
negative_divisors_count = positive_divisors_count
total_divisors_count = positive_divisors_count + negative_divisors_count
print(f"Positive divisors of 12: {divisors}")
divisors_negative = [-x for x in divisors]
print(f"Negative divisors of 12: {divisors_negative}")
print(f"Total divisors count: {total_divisors_count}")
```

Execute this code snippet to find the divisors of 12 and count how many positive and negative integers 12 is a multiple of.", "role": "assistant" } ], "time": 17.360238790512085 { "content": "Perform actions and interact with a household to solve a task. At the beginning of your interactions, you will be given the detailed description of the current environment and your goal to accomplish. For each of your turn, you should choose from two actions: "think" or "Action". If you choose "think", you should first think about the current condition and plan for your future actions, and then output
{ "creation_datetime": "2024-03-04", "file_name": "2402.09015v2.md", "file_path": "paper_data/2402.09015v2.md", "file_size": 63681, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8151983b-f7b9-4083-9d17-f9d3ed3f8d31
# Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications ## A Appendix A.1 Task Examples 12 is a multiple of. "role": "assistant" } ], "time": 17.360238790512085 { "content": "Perform actions and interact with a household to solve a task. At the beginning of your interactions, you will be given the detailed description of the current environment and your goal to accomplish. For each of your turn, you should choose from two actions: "think" or "Action". If you choose "think", you should first think about the current condition and plan for your future actions, and then output your action in this turn. Your output must strictly follow this format: "think: your thoughts." "Action: your next action\\n"; If you choose "ACTION", you should directly output the action in this turn. Your output must strictly follow this format: "ACTION: your next action". After each of your turn, the environment will give you immediate feedback based on which you should plan your next few steps. if the envrionment output "Nothing happened", that means the previous action is invalid and you should try more options. Reminder: 1. the action must be chosen from the given available actions. Any actions except provided available actions will be regarded as illegal. 2. Take the target household immediatly after you find it. 3. Reply 'TERMINATE' only under two circumstances: a). The task has given you clear instructions to return 'TERMINATE' b). The task seems insolvable.\\n Here are two examples.\nYou are in the middle of a room. Looking quickly around you, you see a cabinet 13, a cabinet 12, a cabinet 11, a cabinet 10, a cabinet 9, a cabinet 8, a cabinet 7, a cabinet 6, a cabinet 5, a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a coffeemachine 1, a countertop 1, a diningtable 1, a drawer 1, a fridge 1, a garbagecan 1, a microwave 1, a shelf 3, a shelf 2, a shelf 1, a sinkbasin 1, a stoveburner 4, a stoveburner 3, a stoveburner 2, a stoveburner 1, and a toaster 1.\
{ "creation_datetime": "2024-03-04", "file_name": "2402.09015v2.md", "file_path": "paper_data/2402.09015v2.md", "file_size": 63681, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b6d0c5f7-07af-4666-bdf0-ad663f8b8fc1
# Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications ## A Appendix A.1 Task Examples 12, a cabinet 11, a cabinet 10, a cabinet 9, a cabinet 8, a cabinet 7, a cabinet 6, a cabinet 5, a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a coffeemachine 1, a countertop 1, a diningtable 1, a drawer 1, a fridge 1, a garbagecan 1, a microwave 1, a shelf 3, a shelf 2, a shelf 1, a sinkbasin 1, a stoveburner 4, a stoveburner 3, a stoveburner 2, a stoveburner 1, and a toaster 1.\nYour task is to: put a hot apple in fridge.\n", "role": "user"}, { "content": "THOUGHT: The task is to put a hot apple in fridge. So i need to first find an apple and heat it. First I need to find an apple. An apple is more likely to appear in diningtable (1), coffeetable (1), fridge (1), drawer (1), cabinet (1-13), garbagecan (1). I can check one by one, starting with fridge 1.\n ACTION: go to diningtable 1\n", "role": "assistant"}, { "content": "On the diningtable 1, you see a apple 1, a bread 1, a fork 3, a fork 2, a fork 1, a mug 2, a peppershaker 3, a plate 2, a pot 1, a soapbottle 3, a spatula 1, a spoon 3, a spoon 2, a spoon 1, a winebottle 3, and a winebottle 2.\n", "role": "user"}, { "content": "ACTION: take apple 1 from diningtable 1\n", "role": "assistant"}, { "content": "You pick up the apple 1 from the diningtable 1.\n", "role": "user"}, { "content": "THOUGHT: I noticed that there is a microwave 1 in the room, i will use it to heat the apple.\n ACTION: go to microwave 1\n", "role": "assistant"}, ... ... { "content": "You open the microwave 1
{ "creation_datetime": "2024-03-04", "file_name": "2402.09015v2.md", "file_path": "paper_data/2402.09015v2.md", "file_size": 63681, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
68af6be3-2fb9-4e81-8413-9a42255e9cfb
# Towards Better Human-Agent Alignment: Assessing Task Utility In Llm-Powered Applications ## A Appendix A.1 Task Examples { "content": "ACTION: take apple 1 from diningtable 1\n", "role": "assistant"}, { "content": "You pick up the apple 1 from the diningtable 1.\n", "role": "user"}, { "content": "THOUGHT: I noticed that there is a microwave 1 in the room, i will use it to heat the apple.\n ACTION: go to microwave 1\n", "role": "assistant"}, ... ... { "content": "You open the microwave 1. The microwave 1 is open. In it, you see a egg 1.", "role": "user" }, { "content": "ACTION: heat mug 2 with microwave 1", "role": "assistant"}, { "content": "You heat the mug 2 using the microwave 1.Task success, now reply TERMINATE\n", "role": "user"}, { "content": "TERMINATE", "role": "assistant"}, { "content": "TERMINATE", "role": "user"} }
{ "creation_datetime": "2024-03-04", "file_name": "2402.09015v2.md", "file_path": "paper_data/2402.09015v2.md", "file_size": 63681, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
aab52bae-57f9-445b-a329-d617178bf14c
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving Jiaxin Zhang§∗, Zhongzhi Li⋄∗, Mingliang Zhang⋄, Fei Yin⊛, Chenglin Liu⊛†, Yashar Moshfeghi§‡ Chinese Academy of Sciences⋄, University of Strathclyde§ {jiaxin.zhang, moshfeghi.yashar}@strath.ac.uk§, {lizhongzhi2022, zhangmingliang2018}@ia.ac.cn⋄, {fyin, liucl}@nlpr.ia.ac.cn⊛
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4783c8e5-19a7-491a-b7e4-bf44a7985d3a
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## Abstract Recent advancements in Large Language Models (LLMs) and Multi-Modal Models (MMs) have demonstrated their remarkable capabilities in problem-solving. Yet, their proficiency in tackling geometry math problems, which necessitates an integrated understanding of both textual and visual information, has not been thoroughly evaluated. To address this gap, we introduce the GeoEval benchmark, a comprehensive collection that includes a main subset of 2,000 problems, a 750-problem subset focusing on backward reasoning, an augmented subset of 2,000 problems, and a hard subset of 300 problems. This benchmark facilitates a deeper investigation into the performance of LLMs and MMs on solving geometry math problems. Our evaluation of ten LLMs and MMs across these varied subsets reveals that the WizardMath model excels, achieving a 55.67% accuracy rate on the main subset but only a 6.00% accuracy on the challenging subset. This highlights the critical need for testing models against datasets on which they have not been pre-trained. Additionally, our findings indicate that GPT-series models perform more effectively on problems they have rephrased, suggesting a promising method for enhancing model capabilities.1
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2d5b3ce3-a566-4d40-9d00-cc6e72eb0893
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 1 Introduction Geometry math problems are a key component in assessing the mathematical reasoning skills of K12 students, serving as a critical benchmark for evaluating educational outcomes (Zhang et al., 2023c). The complexity of solving these problems stems from the requirement to interpret both textual and visual information, in addition to applying mathematical reasoning skills. This complexity has made geometry problem-solving a key area of interest for researchers aiming to evaluate the capabilities of AI models in this domain (Chou and Gao, 1996; Ye et al., 2008; Zhang et al., 2023a; Trinh et al., 2024; Zhang et al., 2024). In recent years, several datasets, such as Geometry3K (Lu et al., 2021), PGPS9K (Zhang et al., 2023b), and GeomVerse (Kazemi et al., 2023), have been developed to test the proficiency of AI models in solving geometry math problems. Yet, these datasets often lack a standardized format and sufficient diversity, complicating the assessment of models' genuine proficiency in geometry problem-solving. Furthermore, these datasets typically focus on one type of geometry problem, such as flat geometry, overlooking other crucial areas like solid geometry. This oversight limits the ability to conduct a thorough evaluation across the full spectrum of geometry problems. Simultaneously, advancements in large language models (LLMs) and multi-modal models (MMs) have demonstrated significant potential in handling complex reasoning tasks (Chen et al., 2022b; Wei et al., 2022; Zhang et al., 2023d). This potential has raised considerable interest in testing these advanced models across a variety of tasks, such as math word problem solving (Lu et al., 2023) and physical problem solving (Sawada et al., 2023). Despite this interest, specific research on evaluating these models' effectiveness in geometry problem-solving remains scarce. Therefore, it is critical to develop a new, comprehensive benchmark that can effectively assess LLMs and MMs in geometry problem-solving, especially considering the potential exposure of existing public datasets during model training (Sainz et al., 2023). Comparing the performance of current LLMs and MMs on such a benchmark is essential, as it could yield valuable insights that further the development of models capable of tackling complex reasoning tasks. To prompt research towards assessing LLMs' and M
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
25083b77-6e2f-4297-bca6-139a283baa4a
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 1 Introduction awada et al., 2023). Despite this interest, specific research on evaluating these models' effectiveness in geometry problem-solving remains scarce. Therefore, it is critical to develop a new, comprehensive benchmark that can effectively assess LLMs and MMs in geometry problem-solving, especially considering the potential exposure of existing public datasets during model training (Sainz et al., 2023). Comparing the performance of current LLMs and MMs on such a benchmark is essential, as it could yield valuable insights that further the development of models capable of tackling complex reasoning tasks. To prompt research towards assessing LLMs' and MMs' proficiency in geometry math problem-solving, we introduce the GeoEval benchmark, a comprehensive collection specifically designed for this task. GeoEval features *Comprehensive Variety*, sourced from seven public datasets and formatted uniformly to encompass a wide range of geometric shapes. It includes *Varied Problems*, covering flat, solid, and analytic geometry to challenge models comprehensively. GeoEval supports *Dual Inputs*, accommodating both diagrammatic and textual problem representations, making it suitable for evaluating both LLMs and MMs. To counter potential overfitting to previously seen datasets, GeoEval introduces *Diverse Challenges* through backward reasoning, augmented, and hard subsets, each designed to test different aspects of models' geometry problem-solving abilities. Additionally, GeoEval is annotated with *Complexity Ratings*, allowing for a fine-grained analysis of model performance across various difficulty levels, thus providing a robust framework for advancing AI capabilities in understanding and solving geometry math problems. Examples of geometry problems from our GeoEval can be found in Figure 1. In this paper, we conduct extensive experiments using the GeoEval benchmark to evaluate the proficiency of 10 LLMs and MMs in solving geometry problems. This includes three LLMs: CodeGen2-16B (Nijkamp et al., 2023), GPT-3.5 (OpenAI, 2022), and GPT-4 (OpenAI, 2023); two LLMs specialized in mathematics: WizardMath-70B and WizardMath-7B-V1.1 (Luo et al., 2023); and five MMs: llava-7B-V1.5 (Liu et al., 2023), Qwen-VL (Bai et al., 2023b), mPLUG-Owl2 (Ye et al., 2023), InstructBLIP (
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2c706dcd-a413-4c6a-914f-d906d3b6d8c9
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 1 Introduction 16B (Nijkamp et al., 2023), GPT-3.5 (OpenAI, 2022), and GPT-4 (OpenAI, 2023); two LLMs specialized in mathematics: WizardMath-70B and WizardMath-7B-V1.1 (Luo et al., 2023); and five MMs: llava-7B-V1.5 (Liu et al., 2023), Qwen-VL (Bai et al., 2023b), mPLUG-Owl2 (Ye et al., 2023), InstructBLIP (Dai et al., 2023), and GPT-4V (OpenAI, 2023). The findings reveal that GeoEval forms a challenging benchmark, with both LLMs and MMs struggling to resolve its complexities effectively. Notably, our results indicate that: (1) Models pre-trained on mathematical corpora, such as the WizardMath models, deliver superior performance across various GeoEval subsets (Section 4.3.1), establishing new benchmarks in the field. (2) One advantage of these models is that they implicitly encompass the mathematical knowledge required to solve the geometry math problems (Section 4.6). (3) However, we also find that although pre-training on a mathematical corpus is crucial for solving geometry math problems, it may not be enough (Section 4.3.4). (4) Additionally, we observe that GPT-series models exhibit enhanced problem-solving efficiency when tackling geometry questions that they have previously rephrased (Section 4.3.4). (5) Further analyses underscore the value of incorporating descriptions of geometric diagrams, which significantly aids LLMs in understanding and solving geometry problems (Section 4.5). (6) Finally, our experiments show that the performance of both LLMs and MMs declines as problem length and complexity increase (Section 4.7). Through the GeoEval benchmark, we believe this research provides the first comprehensive quantitative assessment of the latest LLMs and MMs in the domain of geometry problem-solving.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ed52338f-52aa-4aa5-ab44-5aaec4ff8fdf
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 2 Related Work Numerous benchmarks have been developed to assess the capabilities of LLMs in the geometry problem-solving task. However, these benchmarks face limitations, such as restricted access, as with the GEOS (Seo et al., 2015) and GeoShader (Alvin et al., 2017) datasets, or insufficient scale, as seen with GEOS++ (Sachan et al., 2017). Although recent efforts have introduced new benchmarks like Geometry3K (Lu et al., 2021), UniGeo (Chen et al., 2022a), and PGPS9K (Zhang et al., 2023b), they still fall short in offering a uniform format and embracing a wide range of problem types. In response, we introduce the GeoEval benchmark, which is both comprehensive and challenging, aiming to advance the evaluation of geometry problem-solving abilities. Recently, LLMs (Peng et al., 2023; Touvron et al., 2023; OpenAI, 2022) and multi-modal models (MMs) (Liu et al., 2023; Ye et al., 2023; OpenAI, 2023) have achieved impressive results on complex tasks, attracting research into their performance across specialized tasks. Previous work such as MathVista (Lu et al., 2023) has concentrated on scientific domains, while SEED (Li et al., 2023) explores models' understanding of temporal and spatial relationships. Despite these advancements, there remains a gap in the examination of models' ability to solve geometry math problems. Through the GeoEval benchmark, we aim to fill this gap by offering a detailed assessment of both LLMs' and MMs' abilities to tackle a variety of geometry math challenges.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
10cf82df-075a-4fe9-ba24-f5ab1bb316e5
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 3 Geoeval Dataset The GeoEval benchmark is structured into four subsets: GeoEval-2000, comprising 2,000 problems; GeoEval-backward, with 750 problems; GeoEval-aug, containing 2,000 problems; and GeoEval-hard, including 300 problems. The subsequent sections will detail the collection process for each individual subset, followed by an explanation of the unique features of the GeoEval benchmark.2
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c7444c7b-a19b-4680-8f41-aea8b289c52f
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 3.1 Data Collection 3.1.1 Collection From Diverse Data Sources We have compiled a comprehensive collection of public geometry math problem datasets, with a total of 24,912 geometry math problems from sources such as Geometry3K (Lu et al., 2021), PGPS9K (Zhang et al., 2023b), UniGeo (Chen et al., 2022a), GeoQA+ (Cao and Xiao, 2022), GeometryQA (Tsai et al., 2021), as well as geometry problems from the MATH (Hendrycks et al., 2021) and MathQA (Amini et al., 2019) datasets. The first four datasets feature geometry questions that include both problem texts and geometric diagrams, whereas the latter three datasets comprise questions that only contain problem texts. Detailed information about all source datasets is available in Appendix B. Building on the data gathered, we then selected 2,000 geometry math problems to create our GeoEval-2000 subset. This selection process was guided by the aim of inclusively covering a wide range of basic geometric shapes, ensuring a broad representation of geometry concepts. The distribution of geometric shapes within this subset is further detailed in Appendix C. 3.1.2 Backward Data Generation In contrast to forward problems, backward problems use the answer from forward problems as a starting point, posing a query to determine a specific number that was part of the forward problems but is concealed in the backward problems (Jiang et al., 2023). These types of questions are particularly effective in assessing models' capability for multi-step reasoning. Following the methodology of previous research (Yu et al., 2023), we selected 750 problems from the GeoEval-2000 subset and created corresponding backward questions. This process involves masking a number from the forward problem as "X". The prompt "The correct answer is ans_gold. Now please answer what is the value of X?", where ans_gold represents the correct answer to the forward problem, is then added. An example of a backward problem can be found in Appendix D. 3.1.3 Augmented Data Generation To evaluate the resilience of current models and mitigate the risk of data leakage that may have occurred during the pre-training phase, we implement an in-context learning strategy for rephrasing problems from the GeoEval-2000 subset. Each problem is rephrased into five variant candidates by GPT-3.5 (OpenAI, 2022), ensuring they retain the original problem's semantic essence while varying
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
14115bb5-4b77-40be-8af9-58cafa7d8b81
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 3.1 Data Collection 3.1.1 Collection From Diverse Data Sources what is the value of X?", where ans_gold represents the correct answer to the forward problem, is then added. An example of a backward problem can be found in Appendix D. 3.1.3 Augmented Data Generation To evaluate the resilience of current models and mitigate the risk of data leakage that may have occurred during the pre-training phase, we implement an in-context learning strategy for rephrasing problems from the GeoEval-2000 subset. Each problem is rephrased into five variant candidates by GPT-3.5 (OpenAI, 2022), ensuring they retain the original problem's semantic essence while varying in lexical structure. Out of these five alternatives, one is selected randomly to substitute the original problem, forming the GeoEval-aug subset. 3.1.4 Hard Data Collection While the GeoEval-2000 subset comprises geometry problems from a variety of source datasets, it exhibits a lack of diversity in problem categories, notably in solid geometry and analytic geometry. To enhance the diversity of problem categories, we introduce the GeoEval-hard subset, which includes 300 geometry problems specifically focusing on solid geometry and analytic geometry, providing a broader assessment scope. More details on the comparison between the GeoEval-hard subset and other datasets are in Appendix E. The creation of the GeoEval-hard subset begins with web scraping to gather approximately 10,000 authentic geometry problems from online resources. An initial selection is made using a rule-based engine equipped with a keyword list, targeting solid and analytic geometry problems. This step yields around 3,100 potential problems, identified as GeoEval-hard-raw.

| Dataset | Comprehensive Variety | Varied Problems | Dual Inputs | Diverse Challenges | Complexity Ratings |
|---|---|---|---|---|---|
| MathQA (Amini et al., 2019) | n/a | flat | text | ✗ | ✗ |
| GeometryQA (Tsai et al., 2021) | n/a | flat | text | ✗ | ✗ |
| Geometry3K (Lu et al., 2021) | n/a | flat | text + diagram | ✗ | ✗ |
| GeoQA+ (Cao and Xiao, 2022) | n/a | flat | text + diagram | ✗ | ✗ |
| MATH (Hendrycks et al., 2021) | n/a | flat | text | ✗ | ✗ |
| UniGeo (Chen et al., 2022a) | n/a | flat | text + diagram | ✗ | ✗ |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9eb85f4e-c9a9-421c-9da1-0ef763dce411
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 3.1 Data Collection 3.1.1 Collection From Diverse Data Sources

| Dataset | Comprehensive Variety | Varied Problems | Dual Inputs | Diverse Challenges | Complexity Ratings |
|---|---|---|---|---|---|
| MathQA (Amini et al., 2019) | n/a | flat | text | ✗ | ✗ |
| GeometryQA (Tsai et al., 2021) | n/a | flat | text | ✗ | ✗ |
| Geometry3K (Lu et al., 2021) | n/a | flat | text + diagram | ✗ | ✗ |
| GeoQA+ (Cao and Xiao, 2022) | n/a | flat | text + diagram | ✗ | ✗ |
| MATH (Hendrycks et al., 2021) | n/a | flat | text | ✗ | ✗ |
| UniGeo (Chen et al., 2022a) | n/a | flat | text + diagram | ✗ | ✗ |
| PGPS9K (Zhang et al., 2023b) | n/a | flat | text + diagram | ✗ | ✗ |
| GeomVerse (Kazemi et al., 2023) | n/a | flat | text + diagram | ✗ | ✓ |
| MathVista (Lu et al., 2023) | 4 | flat | text + diagram | ✗‡ | ✗ |
| GeoEval | 7 + 3 (new) | flat, solid, analytic | text + diagram | ✓ | ✓ |

These candidate problems are identified as GeoEval-hard-raw. A manual review further narrows them down to 300 problems specifically related to solid and analytic geometry. The cleaning and manual inspection process is documented in Appendix F.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b30b32bd-64d2-417d-a42b-c31e376c54bc
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 3.2 Features Of Geoeval The GeoEval benchmark is specifically designed for assessing the ability of models to solve geometric math problems. This benchmark features five characteristics: Comprehensive Variety, Varied Problems, Dual Inputs, *Diverse Challenges*, and Complexity Ratings, with each attribute exemplified in Appendix G. For an insightful contrast, Table 1 offers a comparative analysis of GeoEval against earlier datasets. Comprehensive Variety GeoEval consists of a diverse collection of geometry problems sourced from seven recent datasets. Therefore, the problems in GeoEval cover a wide range of geometric shapes, offering a comprehensive view of varied geometry math challenges. Varied Problems The GeoEval benchmark encompasses three distinct categories of geometry math problems, namely flat geometry, solid geometry, and analytic geometry. Dual Inputs GeoEval features problems in two formats: those accompanied by diagrams and those consisting solely of text. This versatility makes it suitable for evaluating models that process either diagrams or text-based inputs. Diverse Challenges In addition to gathering public datasets, GeoEval also generates its own out-of-distribution data aimed at addressing data leakage problems. This includes a backward reasoning subset, an augmented subset, and a hard subset, all created by us. Complexity Ratings GeoEval is equipped with annotations indicating the complexity level of each problem, serving as a guideline to evaluate models' proficiency in solving these tasks.3
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ef4e4049-f0f0-4f08-aa70-56d07af88015
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 4 Experiments 4.1 Experimental Setup In this study, we deliberately select state-of-the-art LLMs and MMs that are widely recognized for their advanced capabilities, including: - **LLMs Specialized in Programming Code**: We include the CodeGen2-16B model (Nijkamp et al., 2023), which is renowned for its proficiency in understanding and generating programming code, offering insights into its adaptability to solving geometry math problems. - **LLMs with a Focus on Mathematics**: This includes WizardMath-7B-V1.1 and WizardMath-70B (Luo et al., 2023), explicitly pre-trained on mathematical corpora. Their inclusion allows for an assessment of models that have been fine-tuned to tackle complex mathematical problems. - **LLMs Designed for a Broad Range of Topics**: Models such as GPT-3.5 (OpenAI, 2022) and GPT-4 (OpenAI, 2023) exemplify advanced commercial LLMs engineered to encompass a broad range of topics. - **Multi-Modal Models (MMs) with Diverse Decoders**: Given the ubiquity of the ViT architecture (Dosovitskiy et al., 2021) as the vision encoder in MMs, we select models that integrate ViT with various LLMs as decoders. This includes llava-7B-V1.5 (Liu et al., 2023) with Vicuna (Peng et al., 2023), Qwen-VL (Bai et al., 2023b) using Qwen (Bai et al., 2023a), mPLUG-Owl2 (Ye et al., 2023) with LLaMA (Touvron et al., 2023), InstructBLIP (Dai et al., 2023) with Vicuna (Peng et al., 2023), and GPT-4V (OpenAI, 2023). These models are evaluated through a zero-shot approach, utilizing straightforward instruction prompts to directly assess their geometry problem-solving capabilities without further fine-tuning specifically for our benchmark.4
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6187085c-03ec-49f9-a802-3f4ce29b46c8
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 4.2 Evaluation Metric Building upon the approach of MathVista (Lu et al., 2023), we first input the sequence generated by the model into GPT-4 to extract the target value or option letter. To enhance the precision of our answer extraction, we formulate intricate rules for post-processing the outcomes in cases where GPT-4 falls short. This approach has enabled us to attain an extraction accuracy surpassing 97%, similar to the success rate reported in MathVista (Lu et al., 2023). Details on the crafted prompts and the extraction guidelines are available in Appendix J. The extracted results are compared against the golden answers to determine the final performance metric. Given that models may produce responses in varying formats, either as the precise answer (for instance, "3.15") or as the corresponding option letter (such as "A"), we regard a prediction as accurate if it matches either the golden answer or the golden option letter.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ab5066f4-646b-41c2-a7eb-ae1e1499536d
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 4.3 Experimental Results In this section, we present the accuracy achieved by models on our GeoEval benchmark. Table 2 highlights that models pre-trained on a math-specific corpus tend to outperform others. Furthermore, except for llava-7B-V1.5 and Qwen-VL, multi-modal models (MMs) generally exceed the performance of large language models (LLMs). Notably, InstructBLIP exhibits exceptionally high accuracy scores across all subsets, yet its results raise some concerns, and we have chosen to exclude the InstructBLIP model. The rationale behind this decision is detailed in Appendix K.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7ebdbdc1-b1a8-4273-ba5b-fb4e888f892b
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 4.3.1 Comparison Among Llms When reviewing the performances of LLMs as detailed in Table 2, it becomes evident that models pre-trained on mathematical corpora demonstrate superior efficacy in solving geometry math problems compared to those trained on general corpora. Specifically, when evaluating on all problems of the GeoEval-2000 subset (marked as "A" in the table), WizardMath-70B leads with an accuracy of 55.67%, while WizardMath-7B-V1.1 closely follows with a 54.78% accuracy, outperforming the other LLMs. Conversely, GPT-4, GPT-3.5, and CodeGen2-16B report notably lower accuracies, all under 30.00%. Focusing on questions based solely on problem text within the GeoEval-2000 subset (indicated as "T" in the table), GPT-4 emerges as the frontrunner, securing the highest accuracy of 43.86%, with the WizardMath models also surpassing 32.00% accuracy. These findings underscore the enhanced proficiency of models pre-trained on math-specific corpora in tackling geometry math problems, particularly when problems are well-described textually, as evidenced by GPT-4's leading performance. In the GeoEval-backward subset, WizardMath-7B-V1.1 excels with the highest accuracy of 32.66%, closely followed by WizardMath-70B at 28.66%. This significant drop in performance across all LLMs, compared to the GeoEval-2000 results, highlights a collective weakness in backward reasoning capabilities. For the GeoEval-aug subset, WizardMath-7B-V1.1 again tops the leaderboard with an accuracy of 47.75%, with GPT-4 not far behind at 45.75% accuracy. Lastly, within the GeoEval-hard subset, all models, excluding GPT-3.5, exhibit relatively low accuracies, indicating a broad difficulty in addressing the most challenging solid geometry and analytic geometry problems.

| Model | GeoEval-2000 A (%) | GeoEval-2000 T (%) | GeoEval-backward A (%) | GeoEval-aug A (%) | GeoEval-hard A (%) |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b48f6a19-5464-4c18-a3d5-9b3dd93c0989
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 4.3.1 Comparison Among Llms models, excluding GPT-3.5, exhibit relatively low accuracies, indicating a broad difficulty in addressing the most challenging solid geometry and analytic geometry problems.

| Model | GeoEval-2000 A (%) | GeoEval-2000 T (%) | GeoEval-backward A (%) | GeoEval-aug A (%) | GeoEval-hard A (%) |
|---|---|---|---|---|---|
| CodeGen2-16B ♢ | 28.76 | 22.06 | 5.10 | 8.50 | 5.66 |
| GPT-3.5 ♢ | | | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4a1e7ab8-72b3-4dc6-b3ac-7ee89be2f590
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 4.3.1 Comparison Among Llms | 22.06 | 5.10 | 8.50 | 5.66 | | | GPT-3.5 | | | | | | | ♢ | | | | | | | 24.71 | 21.27 | 22.66 | 41.25 | 22.33 | | | GPT-4 | | | | | | | ♢ | | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9d328ea8-a120-4e1c-8172-fa951c1f95cb
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 4.3.1 Comparison Among Llms | | | | ♢ | | | | | | | 27.95 | 43.86 | 26.00 | 45.75 | 10.10 | | | WizardMath-70B | | | | | | | ♢ | | | | | | | 55.67 | | | | | | | 34.20
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
13520e04-16f9-42d3-81a4-943c5a71ee35
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 4.3.1 Comparison Among Llms | | | 55.67 | | | | | | | 34.20 | 28.66 | 37.75 | 6.00 | | | | WizardMath-7B-V1.1 | | | | | | | ♢ | | | | | | | 54.78 | 32.76 | 32.66 | | | | | 47.75 | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4b27fd8e-ae5c-46fc-b43c-bbd8ffecc04c
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 4.3.1 Comparison Among Llms | 54.78 | 32.76 | 32.66 | | | | | 47.75 | | | | | | | 6.00 | | | | | | | llava-7B-V1.5 | 12.80 | 21.01 | 11.33 | 20.25 | 20.30 | | Qwen-VL | 25.60 | 25.97 | 5.66 | 22.25 | 21.66 | | mPLUG-Owl2 | 37.76 | n/a | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
79afc667-1038-40f3-9a1c-aed23941f58b
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 4.3.1 Comparison Among Llms | 5.66 | 22.25 | 21.66 | | mPLUG-Owl2 | 37.76 | n/a | | | | | 35.33 | | | | | | | 38.00 | | | | | | | 22.66 | | | | | | | InstructBLIP | | | | | | | †
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d328b40e-9458-4a29-9894-15aad119f4c3
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 4.3.1 Comparison Among Llms | | | InstructBLIP | | | | | | | † | | | | | | | 52.18 | n/a | 15.66 | 35.00 | 70.30 | | | GPT-4V | | | | | | | 37.22 | | | | | | | 43.86 | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
281b0ae2-c20a-4a2c-94fc-bbd103f1e32c
# Geoeval: Benchmark For Evaluating Llms And Multi-Modal Models On Geometry Problem-Solving ## 4.3.1 Comparison Among Llms | | 37.22 | | | | | | | 43.86 | | | | | | | ‡ | | | | | | | 26.00 | 45.75 | 10.10 | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
24e7f238-096c-4f40-97b7-61f18bc228ad
## 4.3.2 Comparison Among Multi-Modal Models

Table 2 shows that among the MMs, GPT-4V and mPLUG-Owl2 consistently outperform their counterparts across all subsets. Specifically, within the GeoEval-2000 subset, mPLUG-Owl2 leads with an accuracy of 37.76%, closely followed by GPT-4V at 37.22%, with the remaining MMs falling behind at lower accuracies: Qwen-VL and llava-7B-V1.5 achieve 25.60% and 12.80%, respectively. When examining problems that only involve texts, GPT-4V achieves a 43.86% accuracy, significantly surpassing llava-7B-V1.5 (21.01%) and Qwen-VL (25.97%). In the GeoEval-backward subset, mPLUG-Owl2 tops the MMs with an accuracy of 35.33%, with GPT-4V following at 26.00%. The diminished results of llava-7B-V1.5 and Qwen-VL in this category point to a notable lack of backward reasoning skills. Moving to the GeoEval-aug subset, GPT-4V leads with an impressive 45.75% accuracy, with mPLUG-Owl2 in second place at 38.00%. Both Qwen-VL and llava-7B-V1.5 show comparable performances in this subset. Lastly, within the GeoEval-hard subset, mPLUG-Owl2 demonstrates the highest efficacy with a 22.66% accuracy, closely followed by Qwen-VL and llava-7B-V1.5. Surprisingly, GPT-4V records a lower accuracy of just 10.10%, highlighting the challenging nature of the GeoEval-hard subset and the varied capabilities of MMs in addressing the most difficult problems.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a310ad06-aff7-4f57-92a1-5293d945e2ae
## 4.3.3 Comparison Between Llms And Multi-Modal Models

In the GeoEval-2000 subset, specifically for problems that only include texts, GPT-4 exceeds the top-scoring MM, Qwen-VL, by 17.89%. This is attributed to the MMs' inability to access geometric diagrams, which likely hinders their comprehension of the problems. Moreover, when evaluating across all problems of the GeoEval-2000 subset, WizardMath-70B surpasses the best MM, mPLUG-Owl2, by 17.91% in accuracy. However, MMs like GPT-4V and mPLUG-Owl2 achieve significantly higher accuracy than LLMs not pre-trained on mathematical content. This underscores the value of mathematical pre-training for excelling in geometry problem-solving. Notably, GPT-4V's accuracy on all GeoEval-2000 problems is 9.27% higher than GPT-4's, suggesting GPT-4V's superior capability in solving geometry problems with diagrams.

This pattern persists in the GeoEval-aug subset, where WizardMath-7B-V1.1, a model trained on a mathematical corpus, achieves the highest accuracy at 47.75%. Conversely, mPLUG-Owl2 leads in the GeoEval-backward and GeoEval-hard subsets, with accuracies of 35.33% and 22.66%, respectively. Given that GeoEval-aug rephrases questions from GeoEval-2000, both subsets might have been exposed to the models during their pre-training phase, whereas the GeoEval-backward and GeoEval-hard subsets are less likely to have been previously exposed. This suggests that WizardMath-7B-V1.1 excels on familiar geometry math problems, while mPLUG-Owl2 demonstrates a robust capability in tackling unseen geometry problems. This is further evidenced by the low performance of the WizardMath models on the GeoEval-hard subset, where both models only achieve an accuracy of 6.00%.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b817dc79-06f0-4136-80fe-117f74887fea
## 4.3.4 Analysis On The Best Model

Table 2 shows that GPT-4 records its highest accuracy on the GeoEval-aug subset, though it only secures a 27.95% accuracy on the GeoEval-2000 subset. A similar pattern of improvement is noted for the GPT-3.5 model, which sees its accuracy jump from 24.71% on the GeoEval-2000 subset to 41.25% on the GeoEval-aug subset. This improvement aligns with the involvement of GPT-3.5 in generating the GeoEval-aug subset, suggesting that the capabilities of GPT-3.5 and GPT-4 in addressing geometry math problems significantly benefit from their use in rephrasing geometry question texts.

While WizardMath-70B and WizardMath-7B-V1.1, both pre-trained on a mathematical corpus, demonstrate superior performance on the GeoEval-2000 subset, they show a marked decline in accuracy across the other subsets, with the most significant decreases observed on the GeoEval-hard subset. This indicates that although pre-training on a mathematical corpus is crucial for solving geometry math problems, it may not be enough.

In contrast to the significant variance in accuracy observed among LLMs across different subsets, the top-performing multi-modal model, mPLUG-Owl2, maintains relatively stable accuracies with scores of 37.76% on the GeoEval-2000, 35.33% on the GeoEval-backward, and 38.00% on the GeoEval-aug subsets. Additionally, the performance of GPT-4V on the GeoEval-aug subset surpasses its accuracy on the GeoEval-2000 subset, mirroring the trends observed with GPT-4 and GPT-3.5, further illustrating the enhanced effectiveness of GPT-series models when engaged in rephrasing the content of geometry questions.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1325a1ae-958c-43dc-a6c1-4677fc640a62
## 4.4 Results Across Different Subjects

Figure 2 displays the performance of models across various subjects, revealing distinct strengths. The WizardMath-7B model significantly outperforms others in flat geometry problems, such as length and lines. Conversely, in solid geometry problems like cuboids and spheres, GPT-4V surpasses WizardMath-7B, indicating its superior capability in addressing solid geometry questions.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a809b34c-f678-4681-8903-7ec06993c3d6
## 4.5 Benefit From The Geometric Diagram Descriptions

To assess the impact of including geometric diagram descriptions on models' ability to comprehend geometric diagrams and solve related problems, we selected a sample of 300 questions with geometric diagram descriptions from the GeoEval-2000 subset. We then evaluated the performance of two models, GPT-4V and WizardMath-7B-V1.1, on these questions, both with and without the use of geometric diagram descriptions.

Table 3: Accuracy (%) without (✗) and with (✓) geometric diagram descriptions.

| Model | ✗ | ✓ |
|---|---|---|
| GPT-4V | 40.28 | 45.61 (*+5.33*) |
| WizardMath-7B | 38.10 | 56.83 (*+18.73*) |

The results in Table 3 indicate that GPT-4V's accuracy decreases by 5.33% without the diagram descriptions. More significantly, WizardMath-7B's accuracy falls by 18.73% in the absence of these descriptions. This evidence suggests that supplemental geometric diagram descriptions significantly enhance models' efficiency in solving geometry math problems, particularly benefiting LLMs.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fa6224a6-eb28-4eaf-b5b2-6126e3c5cd76
## 4.6 External Knowledge Required

In the GeoEval benchmark, certain questions require external knowledge, such as the value of π, which is not typically included in the problem text. This necessitates models to have pre-existing knowledge to accurately solve these problems. Figure 3 assesses the performance of four models on problems differentiated by the need for external knowledge, identified through a heuristic approach that classifies problems according to whether their solutions require constants.

Figure 3 shows that the WizardMath-7B-V1.1 model maintains consistent accuracy on the GeoEval-2000 subset regardless of the requirement for external knowledge, unlike the other models, which perform better on problems without such requirements. This consistency in WizardMath-7B-V1.1's performance is likely due to its pre-training on a math-specific corpus, providing it with the necessary knowledge to solve geometry math problems effectively. In contrast, models trained on general corpora may not possess this specialized mathematical knowledge, hindering them from solving the problems correctly.
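The paper does not spell this heuristic out; one plausible minimal version, which simply scans the reference solution for mathematical constants such as π, is sketched below (the constant list and function name are our own assumptions).

```python
import re

# Constants whose presence in a reference solution suggests external knowledge is
# needed; the exact list used by the authors is not given, so this one is assumed.
KNOWN_CONSTANTS = [r"\bpi\b", r"π", r"3\.14"]

def needs_external_knowledge(solution_text: str) -> bool:
    """Heuristically flag problems whose solutions rely on constants absent from the text."""
    return any(re.search(pattern, solution_text, flags=re.IGNORECASE)
               for pattern in KNOWN_CONSTANTS)

# Example: a cone lateral-area solution that uses pi is flagged as knowledge-dependent.
print(needs_external_knowledge("S_side = 0.5 * 2 * 6 * pi * 10 = 60 * pi cm^2"))  # True
```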
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0bdfc738-31f0-4675-a41d-0a7586546e33
## 4.7 Performances According To Different Problem Lengths And Varied Complexities

Figure 4 shows how models perform with inputs of different lengths. Performance varies only slightly for problems ranging from 80 to 100 characters, but there is a clear trend of decreasing accuracy as problem length increases. This is expected, as longer questions typically involve more complex geometry math problems, challenging the models more as the length grows. The figure also shows that the WizardMath-7B-V1.1 model is notably more adept at handling longer questions, with GPT-4V and GPT-4 showing relatively stable accuracy as question length increases. On the other hand, GPT-3.5 and CodeGen2-16B perform less effectively on lengthy questions.

Figure 5 extends this analysis from input length to problem complexity, presenting the performance of models across varying levels of complexity. It is evident that as the complexity of geometry problems escalates, the accuracy of the models correspondingly diminishes.
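For readers who want to reproduce this kind of breakdown from per-problem results, a minimal sketch is given below; the column names and bin edges are illustrative rather than those used in the paper.

```python
import pandas as pd

# One row per evaluated problem: question length in characters and whether the
# model's extracted answer was judged correct. Column names here are illustrative.
results = pd.DataFrame({
    "question_length": [72, 95, 143, 210, 260, 310],
    "correct":         [1,  1,   0,   1,   0,   0],
})

bins = [0, 80, 100, 150, 200, 400]  # character-length buckets (assumed edges)
results["length_bin"] = pd.cut(results["question_length"], bins=bins)

# Mean correctness per bucket gives an accuracy-vs-length curve like Figure 4.
accuracy_by_length = results.groupby("length_bin", observed=True)["correct"].mean()
print(accuracy_by_length)
```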
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
db8258f2-82c6-4a20-a6f2-b510e3b8072d
## 5 Conclusion

In this study, we present GeoEval, a benchmark developed to assess the geometry problem-solving capabilities of large language models (LLMs) and multi-modal models (MMs). GeoEval comprises four distinct subsets, each designed to facilitate a thorough evaluation. Through our assessment of ten cutting-edge LLMs and MMs using the GeoEval benchmark, we underscore the critical role of mathematical corpus pre-training for effective geometry problem resolution. This is exemplified by the WizardMath model's leading performance on the GeoEval-2000 subset, achieving an accuracy of 55.67%. However, the WizardMath model's challenges with the GeoEval-hard subset suggest a need for enhanced reasoning skills. Additionally, our analysis reveals that GPT-series models exhibit improved performance on geometry problems they have rephrased, pointing to the potential benefits of self-rephrasing in problem-solving.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
04df791e-382e-4af3-a491-2b7eab21f263
## 6 Limitations

This study, while providing significant insights into the capabilities of large language models (LLMs) and multi-modal models (MMs) in solving geometry problems, has certain limitations. One primary constraint is that our evaluation predominantly focuses on quantitative metrics of accuracy, potentially overlooking qualitative aspects of model reasoning and explanation that are crucial for educational applications. The performance of models on the hard subset also highlights a gap in advanced reasoning abilities, suggesting that current LLMs and MMs, including those pre-trained on mathematical corpora, may still struggle with highly complex or novel problem types. Moreover, the effectiveness of problems rephrased by GPT-series models suggests a specific interaction effect that may not generalize across all types of geometry problems or other LLMs and MMs, indicating a need for broader research to fully understand the implications of rephrasing on model performance.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e67e1e70-4772-43c7-8513-9dd58aa1d888
## A Statistic Analysis

Table 4 presents a statistical breakdown of the GeoEval benchmark. This benchmark encompasses a total of 5,050 geometry math problems, categorized into four subsets: GeoEval-2000 (2,000 problems), GeoEval-backward (750 problems), GeoEval-aug (2,000 problems), and GeoEval-hard (300 problems). Besides the problem text, each problem in the dataset includes at least one of the following: a geometric diagram, a description of the diagram, or both. The majority of the correct answers are numerical values, with a minority comprising expressions, coordinates, or option letters, primarily in the GeoEval-hard subset.

Table 4: Statistics of the GeoEval benchmark.

| Statistic | Number |
|---|---|
| **Total numbers** | |
| GeoEval-2000 | 2,000 |
| GeoEval-backward | 750 |
| GeoEval-aug | 2,000 |
| GeoEval-hard | 300 |
| **Input types** | |
| text + description | 1,120 |
| text + diagram | 1,120 |
| text + description + diagram | 1,166 |
| **Answer types** | |
| number | 5,050 |
| expression | 232 |
| coordinate | 68 |
| **Problem types** | |
| flat geometry | 5,050 |
| solid geometry | 272 |
| analytic geometry | 28 |
| **Others** | |
| average problem length | 28 |
| average description length | 34 |
| geometry shapes | 12 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4bc0ff03-180a-425b-8407-0bce666f8d7c
## B Source Datasets

Table 5 provides details on the source datasets that contribute to the GeoEval-2000 subset, including information on their content and characteristics. Meanwhile, Figure 6 visualizes the proportional contributions of these source datasets to the GeoEval-2000 subset, showcasing the variety and scope of the geometry problems collected from each source.

Table 5: Source datasets contributing to the GeoEval-2000 subset.

| Source Dataset | Diagram | Description | #Problems |
|---|---|---|---|
| Geometry3K | ✓ | ✓ | 3001 |
| PGPS9K | ✓ | ✓ | 9022 |
| UniGeo | ✓ | ✗ | 4998 † |
| GeoQA+ | ✓ | ✗ | 2518 |
| GeometryQA | ✗ | ✗ | 1398 |
| MATH | ✗ | ✗ | 1349 ‡ |
| MathQA | ✗ | ✗ | 2625 ‡ |
## C Distributions Of Different Geometric Shapes

Figure 7 illustrates the varied distribution of geometric shapes within the GeoEval-2000 subset, highlighting the diversity of geometry concepts represented in this collection.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
210d12e2-a4f0-4481-9a8a-f6261cf7c1e6
## D Backward Question Example

Figure 8 is an example from the GeoEval-backward subset.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f7a97546-c741-4900-82c6-85662ef2f345
## E Comparison Between Geoeval-Hard Subset And Other Public Datasets

To thoroughly assess the models' abilities in grasping concepts of solid and analytic geometry, the GeoEval-hard subset was created to include a diverse range of visual elements, such as three-dimensional views, across a spectrum of topics in solid geometry. The distinctions between the GeoEval-hard subset and other publicly available datasets are detailed in Table 6, demonstrating the unique coverage and complexity of the GeoEval-hard subset in comparison.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f18a86d4-e1aa-44f4-beed-56cb6d6f9f09
## F Inspection Of Geoeval-Hard Subset

To ensure the GeoEval-hard dataset's high quality and accuracy, we form a team of six reviewers, each holding at least a Master's degree, to scrutinize every question. This evaluation process is structured in three phases: individual review, swap review, and candidate review. The primary focus lies on two key standards: the completeness and relevance of the geometric diagrams, and the reasonableness of the answers provided.

In the first phase, "individual review", each reviewer is randomly assigned 50 geometry math problems from the GeoEval-hard dataset. Their task is to assess the geometry math problems based on the standards, marking any that fail to meet these standards. During the "swap review" phase, these sets of 50 geometry math problems are exchanged among reviewers for a second evaluation. To ensure unbiased assessment, we hide the results of the initial review. Here, reviewers again highlight geometry math problems not conforming to the standards. The final phase, "candidate review", involves selecting geometry math problems for the dataset based on the outcomes of the first two phases. Geometry math problems unmarked in both phases are retained, those marked in both are discarded, and those highlighted in only one phase undergo further examination by the entire review team, with the majority decision determining their inclusion.

Table 6: Comparison between the GeoEval-hard subset and other public datasets.

| Dataset | Solid geometry shapes | Question types (solid) | #Knowledge points | Geometry curve knowledge | Question types (analytic) | Grade |
|---|---|---|---|---|---|---|
| UniGeo (Chen et al., 2022a) | ✗ | calculate/prove | – | ✗ | – | 6-12 |
| GeoQA (Cao and Xiao, 2022) | ✗ | calculate | – | ✗ | – | 6-12 |
| Geometry3K (Lu et al., 2021) | ✗ | calculate | – | ✗ | – | 6-12 |
| PGPS9K (Zhang et al., 2023b) | ✗ | calculate/judge | – | ✗ | – | 6-12 |
| MathVista (Geometry Part) (Lu et al., 2023) | ✗ | calculate/judge | – | ✗ | – | – |
| MathVista (FunctionQA Part) (Lu et al., 2023) | ✗ | calculate/judge | – | ✓ | judge | – |
| GeoEval-hard | ✓ | judge/calculate/reason | – | ✓ | judge/calculate/reason | 9-12 |
## Algorithm 1: Algorithm For Classifying Geometry Math Problems Complexity

**Require:** Problem Texts $T$, Diagram Descriptions $D$, Solution Programs $S$

- $len_{T} \leftarrow$ lengths of all $T$ in the dataset $+$ lengths of all $D$
- $len_{S} \leftarrow$ lengths of all $S$ in the dataset
- $I_{T,D} \leftarrow$ combined length of the text and description of the given problem
- $I_{S} \leftarrow$ length of the solution of the given problem

$$C_{I} \leftarrow \alpha \times \frac{I_{T,D}-\min(len_{T})}{\max(len_{T})-\min(len_{T})} + (1-\alpha) \times \frac{I_{S}-\min(len_{S})}{\max(len_{S})-\min(len_{S})}$$

- **if** $0.0 \leq C_{I} \leq 0.2$ **then** $C_{I} \leftarrow$ "Easy"
- **else if** $0.2 < C_{I} \leq 0.6$ **then** $C_{I} \leftarrow$ "Middle"
- **else if** $0.6 < C_{I} \leq 1.0$ **then** $C_{I} \leftarrow$ "Hard"
- **end if**
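A minimal Python sketch of the same scoring rule is given below; the function name is ours, and since the value of α is not stated in the paper, it is left as a parameter with an assumed default.

```python
def classify_complexity(text_desc_len, solution_len, len_t, len_s, alpha=0.5):
    """Bucket a geometry problem into Easy / Middle / Hard following Algorithm 1.

    text_desc_len: combined length of the problem text and diagram description
    solution_len:  length of the solution program
    len_t, len_s:  the corresponding lengths over the whole dataset
    alpha:         weight between the two normalized lengths (default assumed)
    """
    # Min-max normalize both lengths against the dataset-wide statistics.
    norm_t = (text_desc_len - min(len_t)) / (max(len_t) - min(len_t))
    norm_s = (solution_len - min(len_s)) / (max(len_s) - min(len_s))
    c_i = alpha * norm_t + (1 - alpha) * norm_s

    # Thresholds mirror the if/else branches of Algorithm 1.
    if c_i <= 0.2:
        return "Easy"
    if c_i <= 0.6:
        return "Middle"
    return "Hard"

# Example: a fairly long problem with a long solution is rated "Hard".
print(classify_complexity(260, 90, len_t=[40, 300], len_s=[10, 100], alpha=0.5))
```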
## G Examples From Geoeval Representing Five Features

## G.1 Comprehensive Variety

Figure 9 presents sample data from the GeoEval-2000 subset, illustrating its diversity in terms of data sources.

## G.2 Varied Problems

Figure 10 displays examples of three distinct problem types in the GeoEval benchmark: flat geometry, analytic geometry, and solid geometry.
## G.3 Dual Inputs

Figure 9 shows that the GeoEval benchmark comprises geometry math problems that contain both diagrams and textual descriptions, as well as problems that include textual descriptions alone.

## G.4 Diverse Challenges

Figure 11 showcases examples from the GeoEval-2000, GeoEval-backward, GeoEval-aug, and GeoEval-hard subsets, illustrating the diverse challenges within the GeoEval benchmark.

## G.5 Complexity Ratings

Every problem in the GeoEval benchmark is annotated with a complexity rating, indicating the level of skill necessary to solve it, as shown in Figure 12.

## H Algorithm For Classifying Geometry Math Problems Complexity

Algorithm 1 details our methodology for classifying each geometry math problem into distinct levels of complexity.

## I Evaluation Details

## I.1 Model Hyperparameters

Table 7 presents the complete list of hyperparameters applied to the models throughout the evaluation phase.

## I.2 Instruction Prompt Used For Evaluating Models

Prior to employing instruction prompts to steer model responses, we combine the problem texts, diagram descriptions, and choice lists from an example, as depicted in the "Merge" row of Table 8. Following this combination, as illustrated in the "Instruction" row of Table 8, we incorporate instruction prompts into the merged texts and then forward these to the models to generate responses.
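A minimal sketch of this two-step assembly might look as follows; the template strings mirror the "Merge" and "Instruction" rows of Table 8, while the helper function and argument names are our own.

```python
def build_prompt(diagram_description: str, problem_text: str, choices: list) -> str:
    """Assemble an evaluation prompt following the Merge and Instruction templates of Table 8."""
    # "Merge" step: combine the diagram description, problem text, and choice list.
    merged = (
        f"Here are the basic description of the diagram: {diagram_description}, "
        f"{problem_text}, "
        f"The Choices are: {choices}"
    )
    # "Instruction" step: wrap the merged text with the problem-solving instruction.
    return f"Please solve this math problem: {merged} ### Problem-solving Bot:"

prompt = build_prompt(
    "line B A, line C A, line B C\nCA \\perp BC on C, BA = c, BC = a, AC = b, "
    "m \\angle ABC = 60, m \\angle BAC = 30",
    "If c = 5, find b.",
    [1.7, 2.6, 3.5, 4.3],
)
```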
## J Prompt & Heuristic Rules For Answer Extraction

We detail the prompts utilized for extraction using GPT-4, which include an extraction instruction alongside various sample prompts. The extraction instruction and the constructed samples are presented in Table 9, and the heuristic regular-expression rules are listed in Table 10, illustrating the methodology behind the extraction process.

Table 7: The hyperparameters for the models used in the evaluation. When the "Comments" column includes the format *model=""*, the model was loaded from the transformers package. "vLLM package" indicates that the model is implemented with the vLLM package; more details can be found at https://github.com/vllm-project/vllm. For models other than OpenAI's GPT series, custom code was utilized for evaluation unless specified otherwise in the comments.

| Model Name | Generation Parameters | Comments |
|---|---|---|
| CodeGen2-16B | do_sample=True, top_k=0.5, top_p=0.5, max_tokens=512 | model="Salesforce/codegen2-16B" |
| WizardMath-7B-V1.1 | temperature=0.0, top_p=1, max_tokens=1024 | vLLM package |
| WizardMath-70B | temperature=0.0, top_p=1, max_tokens=1024 | vLLM package |
| GPT-3.5 | temperature=0.7, max_tokens=512 | version="gpt-3.5-turbo-0125" |
| GPT-4 | temperature=0.7, max_tokens=512 | version="gpt-4-1106-preview" |
| llava-7B-V1.5 | temperature=0.0, max_new_tokens=512 | llava package |
| Qwen-VL | temperature=0.0, max_new_tokens=512 | model="Qwen/Qwen-VL" |
| mPLUG-Owl2 | do_sample=True, top_p=0.7, max_tokens=512 | model="mPLUG-Owl2" |
| InstructBLIP | do_sample=False, num_beams=5, max_tokens=512, top_p=0.9, temperature=1.0 | model="Salesforce/instructblip-vicuna-7b" |
| GPT-4V | temperature=0.0, max_tokens=512 | version="gpt-4-vision-preview" |

Table 8: The "Merge" and "Instruction" templates used to construct model inputs, each followed by a filled-in example.

| Template | Content |
|---|---|
| Merge | Here are the basic description of the diagram: ${diagram descriptions}, ${problems texts}, The Choices are: ${choice list} |
| Merge (example) | Here are the basic description of the diagram: line B A, line C A, line B C\nCA \\perp BC on C, BA = c, BC = a, AC = b, m \\angle ABC = 60, m \\angle BAC = 30\nIf c = 5, find b. The Choices are: [1.7, 2.6, 3.5, 4.3] |
| Instruction | Please solve this math problem: ${Merge} ### Problem-solving Bot: |
| Instruction (example) | Please solve this math problem: Here are the basic description of the diagram: line B A, line C A, line B C\nCA \\perp BC on C, BA = c, BC = a, AC = b, m \\angle ABC = 60, m \\angle BAC = 30\nIf c = 5, find b. The Choices are: [1.7, 2.6, 3.5, 4.3] ### Problem-solving Bot: |

Table 9: The extraction instruction and in-context examples used with GPT-4 for answer extraction.

| Element | Prompt |
|---|---|
| Task description | You are a result extraction bot. I will provide you with geometry questions and a model output, and you will help me extract the reference answers from the model's output. |
| Example 1 | Question: As shown in the figure, in triangle *ABC*, AB = AC, ∠A = 40◦, DE is the perpendicular bisector of AB. What is the degree measure of ∠DBC? () Choices: (A) 30.0 (B) 20.0 (C) 15.0 (D) 10.0 Model response: Given that ∠A = 40◦, AB = AC, we can conclude that ∠ABC = ∠ACB = 70◦. Also, since DE is perpendicular and bisects AB, we have DB = AD and ∠ABD = ∠A = 40◦. Therefore, ∠DBC = ∠ABC − ∠ABD = 70◦ − 40◦ = 30◦. The answer is option A. Extracted answer: A |
| Example 2 | Question: As shown in the diagram, the lateral surface area of a cone with a radius of the base r equals 6 cm and a height h equals 8 cm is (). Choices: (A) 94.2 (B) 150.8 (C) 188.5 (D) 251.3 Model response: Given h = 8 cm and r = 6 cm, the length of the generatrix of the cone can be set as l. Using the Pythagorean theorem, we have l = √(6² + 8²) = 10 cm. The lateral area of the unfolded cone is S_side = 0.5 × 2 × 6π × 10 = 60π cm². Therefore, the lateral area of the cone is 60π cm². Therefore, the answer is C. Extracted answer: C |
| Example 3 | Question: In triangle ABC, F is the midpoint of BC and point E is on the AC side. AC = 10. What is the length of AE? Choices: (A) 3.0 (B) 4.0 (C) 5.0 (D) 4.5 Model response: Since F is the midpoint of BC, EF is parallel to AB, so EF is the median of triangle ABC. Therefore, point E is the midpoint of AC. Therefore, AE = 0.5 × AC. Since AC = 10, AE = 5. Therefore, the answer is C. Extracted answer: C |

Table 10: Heuristic regular-expression rules used for answer extraction, with demonstration examples.

| Regular expressions | Demonstration Examples |
|---|---|
| value of (\w+) is\s*([\d.]+) | The value of x is 3.5. |
| correct answer is\s*(.+). | correct answer is C. |
| answer is\s*([\d.]+) | answer is 17.1. |
| answer should be\s*(.+) degrees | Therefore, the answer should be choice D. |
| answer to (.+) is (.+) degrees | The answer to the angle ABC is 60◦ |
| answer to the problem is\s*(.+) | The correct answer to problem is y = 2 + 2x. |
| The closest (.+) is (.+). | So we got the area is 13.1. The closest answer is D. |
| the (.+) is equal to (.+). | The degree measure of angle ABC is 35 degrees. |
| (.+) is approximately (.+) units | So, the length of the line segment is approximately 10 units. |
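As a rough illustration, rules such as those in Table 10 could be applied as a fallback cascade along the lines of the sketch below; the subset of patterns, their ordering, and the post-processing are assumptions rather than the released implementation.

```python
import re

# A few of the patterns from Table 10, lightly adapted so that the answer is always
# the first capture group; the full rule set and its ordering are assumed here.
EXTRACTION_PATTERNS = [
    r"value of \w+ is\s*([\d.]+)",
    r"correct answer is\s*(.+?)\.",
    r"answer is\s*([\d.]+)",
    r"answer should be\s*(.+?) degrees",
    r"(?:.+) is approximately ([\d.]+) units",
]

def extract_answer(model_response: str):
    """Return the first regex-extracted answer candidate, or None if no rule fires."""
    for pattern in EXTRACTION_PATTERNS:
        match = re.search(pattern, model_response)
        if match:
            # Strip trailing punctuation so "17.1." becomes "17.1".
            return match.group(1).strip().rstrip(".")
    return None

print(extract_answer("Using the Pythagorean theorem we get l = 10, so the answer is 17.1."))
# -> "17.1"
```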
## K Reason For Removing Instructblip From The Comparison

As shown in Figure 13, InstructBLIP's responses on the GeoEval-2000 subset are typically a bare numeric answer, lacking any intermediate reasoning steps. This suggests that InstructBLIP may have been exposed to GeoEval-2000 questions during its pre-training phase, leading to memorization of answers. This is supported by the observed performance decline from GeoEval-2000 to GeoEval-aug, which falls from 52.18% to 35.00%. Additionally, InstructBLIP tends to directly generate option letters (e.g., "A") for the GeoEval-hard subset without any reasoning process, resulting in an improbably high accuracy rate of 70.30% for this subset. Consequently, in our subsequent analysis and discussions, we have chosen to exclude the InstructBLIP model.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
86ed62a9-d802-4f6b-9aa1-18ca053f7046
## L Models Performances Across Different Data Sources

Table 11 shows models' performances on the GeoEval-2000 subset broken down by original source dataset. We can observe that the WizardMath models still achieve the best accuracy scores on almost all source datasets.

Table 11: Accuracy on the GeoEval-2000 subset by source dataset.

| Models | MATH (Geometry) (%) | GeometryQA (%) | GeoQA+ (%) | PGPS9K (%) | UniGeo (%) | MathQA (Geometry) (%) |
|---|---|---|---|---|---|---|
| CodeGen2-16B | 0.36 | 0.35 | 0.44 | 0.18 | 0.41 | 0.25 |
| GPT-3.5 | 0.35 | 0.31 | 0.19 | 0.27 | 0.23 | 0.26 |
| GPT-4 | 0.58 | 0.74 | 0.27 | 0.28 | 0.27 | 0.44 |
| WizardMath-7B-V1.1 | 0.58 | 0.53 | 0.59 | 0.55 | 0.54 | 0.35 |
| WizardMath-70B | 0.54 | 0.58 | 0.62 | 0.54 | 0.57 | 0.35 |
| llava-7B-V1.5 | 0.26 | 0.4 | 0.12 | 0.15 | 0.12 | 0.19 |
| Qwen-VL | 0.29 | 0.46 | 0.27 | 0.22 | 0.32 | 0.24 |
| mPLUG-Owl2 | 0.27 | n/a | 0.29 | 0.46 | 0.27 | 0.0 |
| InstructBLIP | 0.0 | n/a | 0.59 | 0.48 | 0.57 | 0.0 |
| GPT-4V | 0.45 | 0.61 | 0.34 | 0.38 | 0.45 | 0.38 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10104v1.md", "file_path": "paper_data/2402.10104v1.md", "file_size": 62345, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }