Columns: doc_id (string, length 36), contents (string, length 22 to 3.25k), metadata (dict)
75c8bae3-0108-47f2-bc06-b67d9f4942d0
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts [table fragment: 8; 6; AP College Statistics; College Algebra; Differential Calculus]
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e08864e6-19ea-4229-82fc-4b6ca9fc55a1
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts [table fragment: Differential Calculus; Integral Calculus; AP College Calculus AB; AP College]
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d05a53ec-4ace-47e2-8b6c-6e2d06386ddd
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts [table fragment: Calculus AB; AP College Calculus BC; Multivariable calculus]
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4ba245e2-7ede-44c9-823f-3ec3ed37e466
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts [table fragment: calculus; Differential equations; Linear algebra; 3]
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c3024ae6-4478-4059-8bd4-e4b2bdb0c551
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts [table fragment: 3; 3; # Domain: 6 | 5 | 9 | 9; # Concept: 63 | 51 | 88 | 72]
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6940d928-a10d-4ba8-b870-997941eaae4b
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts [table] # Concept: 63 | 51 | 88 | 72; [Domain table] 1st grade: "Place value", "Addition and subtraction", "Measurement, data, and geometry"; 2nd grade: "Add and subtract within 20", "Place value", "Add and subtract within 100", "Add and subtract within 1,000", "Money and time", "Measurement", "Data", "Geometry"
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
48e932c9-985f-416b-94b1-de9ca6fc496c
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts [table] "Add and subtract within 20", "Place value", "Add and subtract within 100", "Add and subtract within 1,000", "Money and time", "Measurement", "Data", "Geometry"; 3rd grade: "Intro to multiplication", "1-digit multiplication", "Addition, subtraction, and estimation", "Intro to division", "Understand fractions", "Equivalent fractions and comparing fractions", "More with multiplication and division", "Arithmetic patterns and problem solving", "Quadrilaterals", "Area", "Perimeter", "Time", "Measurement", "Represent and interpret data"; 4th grade: "Place value", "Addition, subtraction, and estimation", "Multiply by 1-digit numbers", "Multiply by 2-digit numbers", "Division", "Factors, multiples and patterns", "Equivalent fractions and comparing fractions", "Add
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5a9ab7de-2822-4387-9695-4908c603bc52
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts [table] "Place value", "Addition, subtraction, and estimation", "Multiply by 1-digit numbers", "Multiply by 2-digit numbers", "Division", "Factors, multiples and patterns", "Equivalent fractions and comparing fractions", "Add and subtract fractions", "Multiply fractions", "Understand decimals", "Plane figures", "Measuring angles", "Area and perimeter", "Units of measurement"; 5th grade: "Decimal place value", "Add decimals", "Subtract decimals", "Add and subtract fractions", "Multi-digit multiplication and division", "Multiply fractions", "Divide fractions", "Multiply decimals", "Divide decimals", "Powers of ten", "Volume", "Coordinate plane", "Algebraic thinking", "Converting units of measure", "Line plots", "Properties of shapes"; 6th grade: "Ratios",
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d69c22f4-e14d-4972-bb7a-a6802a8709cf
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts [table] "Properties of shapes"; 6th grade: "Ratios", "Arithmetic with rational numbers", "Rates and percentages", "Exponents and order of operations", "Negative numbers", "Variables & expressions", "Equations & inequalities", "Plane figures"
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
af404aa1-4b42-43e3-9d79-af6b70cfaaba
# Error messages for Formal proof (c) from Lean Prover: ## C Prompt Templates In Mustard C.1 Prompt Template For Proof Filtering Table 13 demonstrates the prompt template used in the proof-filtering stage in MUSTARD.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c7a47e63-f96d-4385-92e3-4f0842368586
# Error messages for Formal proof (c) from Lean Prover: ## C.2 Prompt Templates For Step-By-Step Generation Table 14 demonstrates the variation of prompt templates used in the proof-generation stage in MUSTARD. In this variation, an LLM is prompted to perform (T1), (T2), and (T3) separately to generate the informal statement, informal solution, and formal solution. Note that to prompt the LLM to fulfill (T3), we assign it the character of "a master in Lean" rather than the previous "a math expert" to obtain higher-quality Lean proofs. [Domain table] 7th grade: "Negative numbers: addition and subtraction", "Negative numbers: multiplication and division", "Fractions, decimals, & percentages", "Rates & proportional relationships", "Expressions, equations, & inequalities", "Geometry", "Statistics and probability"; 8th grade: "Numbers and operations", "Solving equations with one unknown", "Linear equations and functions", "Systems of equations", "Geometry", "Geometric transformations", "Data and modeling"; Algebra basics
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fe0f2bec-9d36-4195-9ed3-4e12fe1b3587
# Error messages for Formal proof (c) from Lean Prover: ## C.2 Prompt Templates For Step-By-Step Generation [table] "Numbers and operations", "Solving equations with one unknown", "Linear equations and functions", "Systems of equations", "Geometry", "Geometric transformations", "Data and modeling"; Algebra basics: "Foundations", "Algebraic expressions", "Linear equations and inequalities", "Graphing lines and slope", "Systems of equations", "Expressions with exponents", "Quadratics and polynomials", "Equations and geometry"; Pre-algebra: "Factors and multiples", "Patterns", "Ratios and rates", "Percentages", "Exponents intro and order of operations", "Variables & expressions", "Equations & inequalities introduction", "Percent & rational number word problems", "Proportional relationships", "One-step and two-step equations & inequalities", "Roots, exponents, & scientific notation", "Multi-step equations", "Two-variable equations", "Functions and linear models", "Systems of equations"; Basic geometry
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7270ff6b-a253-4be1-9bcc-12bee2ce8a4d
# Error messages for Formal proof (c) from Lean Prover: ## C.2 Prompt Templates For Step-By-Step Generation [table] & expressions", "Equations & inequalities introduction", "Percent & rational number word problems", "Proportional relationships", "One-step and two-step equations & inequalities", "Roots, exponents, & scientific notation", "Multi-step equations", "Two-variable equations", "Functions and linear models", "Systems of equations"; Basic geometry and measurement: "Intro to area and perimeter", "Intro to mass and volume", "Measuring angles", "Plane figures", "Units of measurement", "Volume", "Coordinate plane", "Decomposing to find area", "3D figures", "Circles, cylinders, cones, and spheres", "Angle relationships", "Scale", "Triangle side lengths", "Geometric transformations"
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
47748540-121a-42be-b79a-664ce3f21b84
# Error messages for Formal proof (c) from Lean Prover: ## D More Statistical Results Of Mustardsauce D.1 Difficulty Of Mustardsauce By Number Of Correction Figure 5 demonstrates the proportions of data points that obtain a valid proof after different numbers of corrections. Generally speaking, data points that need no correction are relatively less difficult for the LLMs, while those that require multiple corrections are challenging. Overall, theorem-proving problems are more challenging for LLMs to solve than the generated word problems. For data points with 2 seed concepts, more than 90% of the data cannot pass the prover validation at the first generation, and almost 30% of them require 2 correction steps to obtain a valid proof. Similar observations hold for word problems with 2 seed concepts, which suggests that the data subset with 2 seed concepts is challenging for the LLMs in general. In contrast, data with 1 seed concept are easier for LLMs, but more than half of the data points still need proof improvements based on error messages from the theorem prover. Therefore, MUSTARD generates valid data points at different difficulty levels, and the majority of the problems are challenging for the LLMs.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
556ad2fa-5a7f-4899-9bfc-2e4cd83cdb90
# Error messages for Formal proof (c) from Lean Prover: ## D.2 Data Diversity We compute ROUGE-L (Lin, 2004) to check the diversity of the generated informal statements and proofs. Specifically, given a data set, we perform 10 rounds of bootstrapping. In each round, we randomly sample 10 data points from the data set, pair each of them with the remaining data points, and compute pair-wise ROUGE-L scores. The ROUGE-L score per round is obtained by averaging the pair-wise scores. The final ROUGE-L score is the average over the bootstrapping rounds. We compare the scores among the generation settings, and the results are shown in Figure 6. All settings have a ROUGE-L score below 0.25, which indicates high diversity of the generated informal statements and informal proofs. All-at-once and step-by-step [Domain table] Algebra 1: "Algebra foundations", "Solving equations & inequalities", "Working with units", "Linear equations & graphs", "Forms of linear equations", "Systems of equations", "Inequalities (systems & graphs)", "Functions", "Sequences", "Absolute value & piecewise functions", "Exponents & radicals", "Exponential growth & decay", "Quadratics: Multiplying & factoring", "Quadratic functions & equations", "Irrational numbers", "Creativity in algebra"; Algebra 2
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
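The bootstrapped pair-wise ROUGE-L procedure described in the D.2 chunk above can be sketched in pure Python. This is a minimal sketch using a simple LCS-based ROUGE-L F1, not the paper's implementation (which computes the Lin, 2004 metric); function names and defaults are illustrative.

```python
import random

def lcs_len(a, b):
    # classic dynamic-programming longest common subsequence over token lists
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(cand, ref):
    # ROUGE-L F1 between two whitespace-tokenized strings
    c, r = cand.split(), ref.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    p, rec = lcs / len(c), lcs / len(r)
    return 2 * p * rec / (p + rec)

def bootstrap_rouge_l(texts, rounds=10, sample_size=10, seed=0):
    # per D.2: each round samples data points, pairs each with the remaining
    # points, averages pair-wise scores; final score averages over rounds
    rng = random.Random(seed)
    round_scores = []
    for _ in range(rounds):
        sample = rng.sample(texts, min(sample_size, len(texts)))
        pair_scores = [rouge_l_f1(s, t) for s in sample for t in texts if t is not s]
        round_scores.append(sum(pair_scores) / len(pair_scores))
    return sum(round_scores) / len(round_scores)
```

A low average (below 0.25, as reported) indicates little pair-wise overlap and hence high diversity.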
b7b11fb9-012a-4c46-986c-4c4c4adb41c5
# Error messages for Formal proof (c) from Lean Prover: ## D.2 Data Diversity [table] graphs)", "Functions", "Sequences", "Absolute value & piecewise functions", "Exponents & radicals", "Exponential growth & decay", "Quadratics: Multiplying & factoring", "Quadratic functions & equations", "Irrational numbers", "Creativity in algebra"; Algebra 2: "Polynomial arithmetic", "Complex numbers", "Polynomial factorization", "Polynomial division", "Polynomial graphs", "Rational exponents and radicals", "Exponential models", "Logarithms", "Transformations of functions", "Equations", "Trigonometry", "Modeling"; High school geometry: "Performing transformations", "Transformation properties and proofs", "Congruence", "Similarity", "Right triangles & trigonometry", "Analytic geometry", "Conic sections", "Circles", "Solid geometry"
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
645737f4-c6b3-4e66-9641-7e6d4056eadb
# Error messages for Formal proof (c) from Lean Prover: ## D.2 Data Diversity [table] "Performing transformations", "Transformation properties and proofs", "Congruence", "Similarity", "Right triangles & trigonometry", "Analytic geometry", "Conic sections", "Circles", "Solid geometry"; Trigonometry: "Right triangles & trigonometry", "Trigonometric functions", "Non-right triangles & trigonometry", "Trigonometric equations and identities"; Statistics and probability: "Analyzing categorical data", "Displaying and comparing quantitative data", "Summarizing quantitative data", "Modeling data distributions", "Exploring bivariate numerical data", "Study design", "Probability", "Counting, permutations, and combinations", "Random variables", "Sampling distributions", "Confidence intervals", "
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
48045146-26bb-43cd-a359-79c4ad95223f
# Error messages for Formal proof (c) from Lean Prover: ## D.2 Data Diversity [table] "Analyzing categorical data", "Displaying and comparing quantitative data", "Summarizing quantitative data", "Modeling data distributions", "Exploring bivariate numerical data", "Study design", "Probability", "Counting, permutations, and combinations", "Random variables", "Sampling distributions", "Confidence intervals", "Significance tests (hypothesis testing)", "Two-sample inference for the difference between groups", "Inference for categorical data (chi-square tests)", "Advanced regression (inference and transforming)", "Analysis of variance (ANOVA)"; High school statistics: "Displaying a single quantitative variable", "Analyzing a single quantitative variable", "Two-way tables", "Scatterplots", "Study design", "Probability", "Probability distributions & expected value"; Precalculus: "Composite and inverse functions", "Trigonometry", "Complex numbers", "Rational functions", "Conic sections", "Vectors", "Matrices", "Probability and combinatorics", "Series", "Limits and continuity"
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b852f8c3-e4a9-4d54-9b66-03137edc8b44
# Error messages for Formal proof (c) from Lean Prover: ## D.2 Data Diversity [table] "Composite and inverse functions", "Trigonometry", "Complex numbers", "Rational functions", "Conic sections", "Vectors", "Matrices", "Probability and combinatorics", "Series", "Limits and continuity"; Calculus 1: "Limits and continuity", "Derivatives: definition and basic rules", "Derivatives: chain rule and other advanced topics", "Applications of derivatives", "Analyzing functions", "Integrals", "Differential equations", "Applications of integrals"; Calculus 2: "Integrals review", "Integration techniques", "Differential equations", "Applications of integrals", "Parametric equations, polar coordinates, and vector-valued functions", "Series" | generation share similar data diversity. The ROUGE-L scores increase slightly with educational level. Generating higher-educational theorem-proving data with 2 concepts step-by-step exceeds 0.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9f567a1d-83d9-49a3-8d79-11689a5e8ca5
# Error messages for Formal proof (c) from Lean Prover: ## D.2 Data Diversity [table] "Integrals review", "Integration techniques", "Differential equations", "Applications of integrals", "Parametric equations, polar coordinates, and vector-valued functions", "Series" | generation share similar data diversity. The ROUGE-L scores increase slightly with educational level. Generating higher-educational theorem-proving data with 2 concepts step-by-step exceeds 0.2 in ROUGE-L, so this setting is relatively challenging. We further investigate the diversity of formal statements and proofs. We collect all occurrences of the tactics and lemmas used, and Figure 7 shows their distributions. LLMs tend to use diverse lemmas to solve the problems, which suggests they memorize and understand multiple lemmas. The most commonly used lemmas in both theorem proving and word problems include [Domain table] AP College Statistics: "Exploring categorical data", "Exploring one-variable quantitative data: Displaying and describing", "Exploring one-variable quantitative data: Summary statistics", "Exploring one-variable quantitative data: Percentiles, z-scores, and the normal distribution", "Exploring two-variable quantitative data", "Collecting data", "Probability", "Random variables and probability distributions", "Sampling distributions", "Inference for categorical data:
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
51e79d9f-2ea6-42f6-a256-e93378965ae6
# Error messages for Formal proof (c) from Lean Prover: ## D.2 Data Diversity [table] Exploring one-variable quantitative data: Summary statistics", "Exploring one-variable quantitative data: Percentiles, z-scores, and the normal distribution", "Exploring two-variable quantitative data", "Collecting data", "Probability", "Random variables and probability distributions", "Sampling distributions", "Inference for categorical data: Proportions", "Inference for quantitative data: Means", "Inference for categorical data: Chi-square", "Inference for quantitative data: slopes", "Prepare for the 2022 AP Statistics Exam"; College Algebra: "Linear equations and inequalities", "Graphs and forms of linear equations", "Functions", "Quadratics: Multiplying and factoring", "Quadratic functions and equations", "Complex numbers", "Exponents and radicals", "Rational expressions and equations", "Relating algebra and geometry", "Polynomial arithmetic", "Advanced function types", "Transformations of functions", "Rational exponents and radicals", "Logarithms"; Differential Calculus
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fb6bea98-c32a-4eab-a946-ca4d84870957
# Error messages for Formal proof (c) from Lean Prover: ## D.2 Data Diversity [table] "Relating algebra and geometry", "Polynomial arithmetic", "Advanced function types", "Transformations of functions", "Rational exponents and radicals", "Logarithms"; Differential Calculus: "Limits and continuity", "Derivatives: definition and basic rules", "Derivatives: chain rule and other advanced topics", "Applications of derivatives", "Analyzing functions", "Parametric equations, polar coordinates, and vector-va"; Integral Calculus: "Integrals", "Differential equations", "Applications of integrals", "Parametric equations, polar coordinates, and vector-valued functions", "Series"; AP College Calculus AB: "Limits and continuity", "
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ef8119fd-4dcf-468e-b624-f40464d45faf
# Error messages for Formal proof (c) from Lean Prover: ## D.2 Data Diversity [table] functions", "Series"; AP College Calculus AB: "Limits and continuity", "Differentiation: definition and basic derivative rules", "Differentiation: composite, implicit, and inverse functions", "Contextual applications of differentiation", "Applying derivatives to analyze functions", "Integration and accumulation of change", "Differential equations", "Applications of integration", "AP Calculus AB solved free response questions from past exams", "AP Calculus AB Standards mappings"; AP College Calculus BC: "Limits and continuity", "Differentiation: definition and basic derivative rules", "Differentiation: composite, implicit, and inverse functions", "Contextual applications of differentiation", "Applying derivatives to analyze functions", "Integration and accumulation of change", "Differential equations", "Applications of integration", "Parametric equations, polar coordinates, and vector-valued functions", "Infinite sequences and series", "AP Calculus BC solved exams", "AP Calculus BC Standards mappings"
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
534b9e44-e0f7-4e56-88e7-8f016bc19f2a
# Error messages for Formal proof (c) from Lean Prover: ## D.2 Data Diversity [table] inverse functions", "Contextual applications of differentiation", "Applying derivatives to analyze functions", "Integration and accumulation of change", "Differential equations", "Applications of integration", "Parametric equations, polar coordinates, and vector-valued functions", "Infinite sequences and series", "AP Calculus BC solved exams", "AP Calculus BC Standards mappings"; Multivariable calculus: "Thinking about multivariable functions", "Derivatives of multivariable functions", "Applications of multivariable derivatives", "Integrating multivariable functions", "Green's, Stokes', and the divergence theorems"; Differential equations: "First order differential equations", "Second order linear equations", "Laplace transform"; Linear algebra
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
64f76288-02d7-4feb-b992-b241c726e2e6
# Error messages for Formal proof (c) from Lean Prover: ## D.2 Data Diversity [table] "First order differential equations", "Second order linear equations", "Laplace transform"; Linear algebra: "Vectors and spaces", "Matrix transformations", "Alternate coordinate systems (bases)" | of_as_true, mul_comm, and nat.mul_comm. of_as_true is often used in automated proofs of complex propositions to first establish their decidability. mul_comm and nat.mul_comm are about
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a068c56c-ffc6-4b83-acec-7cc18f2a0e02
# Error messages for Formal proof (c) from Lean Prover: ## Prompt Template For Proof Filtering In the following, you are given a "Problem", a pair of corresponding "Informal proof" and "Formal proof in Lean 3", along with error messages from a Lean Prover corresponding to the "Formal proof in Lean 3". Now please carefully modify the "Formal proof in Lean 3" section so that it passes the Lean Prover without error. You should write the modified complete proof in your response.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b2670262-8903-4ab0-bb3d-3bc1637172a6
# Problem: <generated problem>
# Informal proof: <generated informal proof>
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
914c4aeb-45c8-4fac-a04a-6cc6aae071d7
# Formal proof (1) in Lean 3:
'''lean
line 1 <code>
line 2 <code>
line 3 <code>
...
'''
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c151691b-31ab-44d7-a3c9-80181b551b41
# Error messages for Formal proof (1) from Lean Prover: <error messages> ...
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1b00bb8d-9a8e-4ec4-ae1c-61e0388b1bca
# Formal proof (k) in Lean 3:
'''lean
line 1 <code>
line 2 <code>
line 3 <code>
...
'''
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2e036d8f-72b9-43d9-b7c3-eca0b162a860
# Error messages for Formal proof (k) from Lean Prover: <error messages> the multiplicative commutative law and are often used for expression simplification. Similarly, the frequently used neg_pos, pow_two, nat.div_eq_of_eq_mul_right, and so forth are basic lemmas for proving advanced properties. Similar observations are found in tactics. Therefore, LLMs are good at breaking problems into basic steps and using basic lemmas to complete complex proofs.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
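The lemmas named in the surrounding chunks (nat.mul_comm, mul_comm, of_as_true) can be illustrated with minimal Lean 3 examples; these are generic illustrations of the lemmas themselves, not proofs produced by the paper's pipeline:

```lean
-- mul_comm and nat.mul_comm state the commutativity of multiplication
example (a b : ℕ) : a * b = b * a := nat.mul_comm a b
example (a b : ℤ) : a * b = b * a := mul_comm a b

-- of_as_true discharges a decidable proposition by evaluation,
-- which is why it appears in automated proofs of complex propositions
example : 2 + 2 = 4 := of_as_true trivial
```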
275f6ff6-5266-4f29-a81e-5a6a5ae24321
# Error messages for Formal proof (k) from Lean Prover: ## E Case Study F Implementation Details Of Downstream Task F.1 Datasets GSM8K (Cobbe et al., 2021) GSM8K consists of 8.5K elementary mathematics word problems that require 2 to 8 arithmetic operations to arrive at the final answer. The dataset comprises 7.5K training questions and 1K test questions. Inspired by Kojima et al. (2022), during inference we use appropriate prompts and examples to prompt the model for zero-shot and few-shot reasoning. The prompts used are demonstrated in Table 27. Mathlib Mathlib is a community-maintained library designed for the Lean theorem prover. It encompasses both programming tools and mathematical content, along with tactics that leverage these tools to facilitate mathematical development. The version of Mathlib we use is consistent with Wang et al. (2023a). The sizes of the training, test, and validation sets are 36,960, 1,621, and 1,580, respectively. miniF2F (Zheng et al., 2022) MiniF2F is a formal mathematics benchmark that has been translated to work with multiple formal systems. It encompasses exercise statements from olympiads such as AMC, AIME, and IMO, in addition to content from high-school and undergraduate mathematics courses. The MiniF2F test split contains 244 formal Olympiad-level mathematics problem statements. We construct the training corpus auto-regressively, following Polu & Sutskever (2020), Han et al. (2022), and Wang et al. (2023a). During inference, we evaluate using best-first search with d = 8 expansions per proof search.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
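The best-first search with d = 8 expansions mentioned in F.1 can be sketched generically. The `expand`, `is_proved`, and `score` callbacks below are hypothetical stand-ins for tactic generation and model log-probabilities; this is not the paper's prover interface.

```python
import heapq
import itertools

def best_first_proof_search(init_state, expand, is_proved, score, d=8, max_steps=100):
    """Generic best-first search: repeatedly pop the highest-scoring open
    state and push up to d candidate successors (e.g. tactic applications)."""
    counter = itertools.count()  # tie-breaker so heapq never compares states
    frontier = [(-score(init_state), next(counter), init_state)]
    for _ in range(max_steps):
        if not frontier:
            return None
        _, _, state = heapq.heappop(frontier)
        if is_proved(state):
            return state
        for nxt in expand(state)[:d]:  # keep at most d expansions per step
            heapq.heappush(frontier, (-score(nxt), next(counter), nxt))
    return None
```

As a toy usage, states can be integers with goal 10, `expand` proposing n+1 and n+2, and `score` rewarding closeness to the goal.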
94c172d8-7a16-4a3f-ac26-55018ab01969
# Error messages for Formal proof (k) from Lean Prover: ## F.2 Models GPT2-large (Radford et al., 2019) The GPT2-large model is a transformer language model following the decoder-only architecture introduced by Vaswani et al. (2017). The model has 774 million parameters, 36 layers, 20 attention heads, and a hidden dimension of 1,280. It employs a tokenizer with a vocabulary size of 50,400. The model is pre-trained on GitHub Python code and the arXiv library. Llama 2-7B (Touvron et al., 2023) Llama 2 is a language model that employs an auto-regressive transformer architecture, pre-trained on an open-source corpus. The model utilizes both supervised fine-tuning and reinforcement learning from human feedback to align with human preferences. The 7B model is configured with 32 layers, 32 attention heads, and a hidden dimension of 4,096.
### F.3 Implementation Details

We employ LoRA (Hu et al., 2021) to fine-tune the pre-trained models on MUSTARDSAUCE, where the trainable parameters of GPT2-large and Llama 2-7B constitute 19% and 6% of the total, respectively. Training runs for a maximum of 10 epochs with a batch size of 16 and 1,000 warm-up steps, with a maximum learning rate of 1e-4 and a minimum learning rate of 5e-6. The best checkpoint is selected based on the minimum perplexity on the validation split.
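The trainable-parameter fractions above come from LoRA adding only low-rank adapter matrices to selected layers. A minimal sketch of the bookkeeping, with illustrative layer shapes and rank (the paper does not state its LoRA rank or target layers):

```python
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Extra trainable parameters LoRA adds to one d_in x d_out linear layer:
    a down-projection A (d_in x rank) plus an up-projection B (rank x d_out)."""
    return rank * (d_in + d_out)

def trainable_fraction(layers, rank):
    """Fraction of trainable parameters when only LoRA adapters are trained.
    `layers` is a list of (d_in, d_out) shapes of the adapted linear layers."""
    base = sum(d_in * d_out for d_in, d_out in layers)
    extra = sum(lora_params(d_in, d_out, rank) for d_in, d_out in layers)
    return extra / (base + extra)
```

For square layers the fraction works out to 2r/(d + 2r), which shrinks as the hidden dimension grows; this is consistent with the smaller model (GPT2-large, d = 1,280) having a larger trainable fraction than Llama 2-7B (d = 4,096), though the exact 19%/6% figures also depend on which layers are adapted.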
## G More Experimental Results

Table 28 compares Llama 2-7B and Llama 2-70B on GSM8K and MATH. Llama 2-70B fine-tuned with MUSTARDSAUCE-valid consistently outperforms the model fine-tuned with MUSTARDSAUCE-random, by 8.33% in the zero-shot setting and 5.30% in the few-shot setting. It also surpasses the models fine-tuned with the invalid subset and with the entire generated dataset. These results further suggest the effectiveness of the framework with a larger fine-tuned LM.
## H Data Contamination Check

We check cross-contamination between MUSTARDSAUCE and the evaluation datasets, inspired by OpenAI (2023). However, instead of using a substring match, which may result in false negatives and false positives, we compute cosine similarities based on text-embedding-ada-002 and then inspect the nearest data points in the paired datasets. The automated theorem proving (ATP) dataset miniF2F only contains formal statements and proofs, while the math word problem (MWP) dataset GSM8K only contains informal statements and proofs. For a more detailed inspection, we check the corresponding fractions in MUSTARDSAUCE. Tables 29, 32, 31, and 30 demonstrate the inspected cases. The nearest data points are dissimilar. We therefore rule out data contamination in the generated MUSTARDSAUCE based on these observations.
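The nearest-neighbour inspection described above can be sketched as follows. The embedding step is elided here (the paper uses text-embedding-ada-002; the vectors below are placeholders), but the cosine-similarity ranking is the same:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest(query_vec, corpus_vecs):
    """Index and similarity of the corpus item closest to the query;
    the top pairs are then inspected by hand for contamination."""
    sims = [cosine(query_vec, c) for c in corpus_vecs]
    best = max(range(len(sims)), key=sims.__getitem__)
    return best, sims[best]
```

Ranking by cosine similarity rather than substring match catches paraphrased overlaps (avoiding false negatives) while manual inspection of the top pairs filters out coincidental high-similarity matches (false positives).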
## Prompt Templates For Step-By-Step Generation

You are a math expert. Now please come up with a math problem according to the following requirements. The math problem should contain a question part (indicated by ''Problem: ''), a corresponding solution in natural language (indicated by ''Informal proof:''), and a translated formal solution in Lean 3 (indicated by ''Formal proof in Lean 3:''). Please note that the informal proof and the formal proof need to be identical. Please create a [QUESTION TYPE] in the level of [EDUCATIONAL LEVEL] based on the following knowledge point(s): [CONCEPT] in [DOMAIN]; [CONCEPT] in [DOMAIN]. (T1) Please first write the question part regardless of the other parts. You must write the following format, filling in the ''# Problem: '' section, and leaving the other two sections empty.
# Problem: ... # Informal proof: ... # Formal proof in Lean 3: ... You are a math expert. Now please come up with a math problem according to the following requirements. The math problem should contain a question part (indicated by ''Problem: ''), a corresponding solution in natural language (indicated by ''Informal proof:''), and a translated formal solution in Lean 3 (indicated by ''Formal proof in Lean 3:''). Please note that the informal proof and the formal proof need to be identical. Please create a [QUESTION TYPE] in the level of [EDUCATIONAL LEVEL] based on the following knowledge point(s): [CONCEPT] in [DOMAIN]; [CONCEPT] in [DOMAIN]. (T2) Please then write the corresponding solution in natural language (indicated by ''Informal proof:'') given the ''# Problem: '', filling in the ''# Informal proof: '' section, and leaving the other section empty. <generated problem>
# Informal proof: ...
# Formal proof in Lean 3: ... You are a master in Lean. Now please come up with a math problem according to the following requirements. The math problem should contain a question part (indicated by ''Problem: ''), a corresponding solution in natural language (indicated by ''Informal proof:''), and a translated formal solution in Lean 3 (indicated by ''Formal proof in Lean 3:''). Please note that the informal proof and the formal proof need to be identical. Please create a [QUESTION TYPE] in the level of [EDUCATIONAL LEVEL] based on the following knowledge point(s): [CONCEPT] in [DOMAIN]; [CONCEPT] in [DOMAIN]. (T3) Please translate the ''# Informal proof:'' section into Lean 3 and fill in the ''# Formal proof in Lean 3: '' section.
# Problem: <generated problem> # Informal proof: <generated informal proof> # Formal proof in Lean 3: ...

Question Type: Word Problem. Educational Level: Elementary School. Concept(s): Division in 4th grade. Comments: MUSTARD accurately counts the number of people and correctly performs division.

Informal Statement John and his 3 friends are on a trip. They bought a huge box of chocolates containing 240 pieces. They decided to split the chocolates equally among themselves. How many chocolates will each one of them get?

Informal Proof Since there are 4 people in total and they have 240 chocolates, we can find out how many chocolates each person will get by dividing the total number of chocolates by the total number of people. So, 240 divided by 4 equals 60. Therefore, each person will get 60 chocolates.

Formal Statement and Proof def divide (x y : ℕ) : ℕ := x / y #reduce divide 240 4 -- The output should be 60, which means each person will get 60 chocolates.

Question Type: Theorem Proving. Educational Level: Elementary School. Concept(s): Quadrilaterals in 3rd grade.
Comments: MUSTARD believes that a quadrilateral with four equal sides is a square. It treats a necessary condition as a sufficient condition.
Informal Statement Given a quadrilateral ABCD where AB = BC = CD = DA = 5 units, prove that this quadrilateral is a square.

Informal Proof The given quadrilateral ABCD has all its sides equal. This is a property of a square. Hence, ABCD is a square.

Formal Statement and Proof import data.real.basic def is square quad (AB BC CD DA : ℝ) : Prop := AB = BC ∧ BC = CD ∧ CD = DA def ABCD is square : ∀ (AB BC CD DA : ℝ), AB = 5 ∧ BC = 5 ∧ CD = 5 ∧ DA = 5 → is square quad AB BC CD DA := begin intros AB BC CD DA h, cases h with h1 h2, cases h2 with h3 h4, cases h4 with h5 h6, split, { rw h1, exact h3 }, split, { rw h3, exact h5 }, { rw h5, exact h6 }, end

Question Type: Word Problem. Educational Level: Elementary School. k=2. Successful case. Concept(s): Addition and subtraction in 1st grade; Measuring angles in 4th grade. Comments: MUSTARD creatively combines the two concepts and constructs a reasonable question.

Informal Statement John has a favorite toy car that he likes to move around a circular track. He starts moving the car from a certain point and stops after the car has moved 90 degrees. Then he moves the car again but this time in the opposite direction, stopping when the car has moved 45 degrees. What is the net degree movement of John's toy car?

Informal Proof Initially, John moves his car 90 degrees. Then he moves the car in the opposite direction 45 degrees. Since the car is moving in the opposite direction, we consider this as -45 degrees. To find the net movement, we add the two movements together. So, 90 + (-45) equals 45. Therefore, the net degree movement of John's toy car is 45 degrees.
Formal Statement and Proof import data.int.basic open int -- define the initial movement def initial movement : ℤ := 90 -- define the backward movement def backward movement : ℤ := -45 -- define the function to calculate the net movement def net movement (initial backward : ℤ) : ℤ := initial + backward -- proof that the net movement is 45 degrees example : net movement
initial movement backward movement = 45 := begin unfold net movement, unfold initial movement, unfold backward movement, exact dec trivial, end

Question Type: Theorem Proving. Educational Level: Middle School. k=2. Successful case. Concept(s): Geometry in 8th grade; Algebraic expressions in Algebra basics. Comments: MUSTARD understands polynomials and writes a valid proof in Lean.

Informal Statement Given a rectangle ABCD where AB is x + 5 and AD is 2x − 3. Prove that the area of the rectangle ABCD is (2x² + 7x − 15) square units.

Informal Proof In a rectangle, the area is calculated by multiplying the length and the width. Here, the length is represented by the expression x + 5 and the width is represented by the expression 2x − 3. By multiplying these two expressions together, we get: (x + 5) ∗ (2x − 3) = 2x² − 3x + 10x − 15 = 2x² + 7x − 15. Therefore, the area of the rectangle ABCD is (2x² + 7x − 15) square units.

Formal Statement and Proof import algebra.ring.basic variables (x : ℤ) def rectangle area : (x + 5) * (2 * x - 3) = 2 * (x * x) + 7 * x - 15 := begin have h1 : (x + 5) * (2 * x - 3) = 2 * (x * x) - 3 * x + 10 * x - 15, by ring, have
h2 : 2 * (x * x) - 3 * x + 10 * x - 15 = 2 * (x * x) + 7 * x - 15, by simp, rw h1, exact h2, end

Question Type: Theorem Proving. Educational Level: High School. k=1. Failed case. Concept(s): Displaying and comparing quantitative data in Statistics and probability. Comments: MUSTARD gives the definition of median in the formal proof.

Informal Statement Given two sets of data A = {5, 7, 10, 12, 15} and B = {6, 8, 11, 14, 16}, we are to find the median of the two sets and prove that the median of set A is less than the median of set B.

Informal Proof To find the median of a given data set, we need to sort the data in ascending order and then find the middle value. In the case of set A and set B, both are already sorted in ascending order. For set A, the median is 10 and for set B, the median is 11. Therefore, it is clear that the median of set A is less than the median of set B.

Formal Statement and Proof import data.set.finite import data.set.lattice def A : finset ℕ := {5, 7, 10, 12, 15}.to finset def B : finset ℕ := {6, 8, 11, 14, 16}.to finset def median (A : finset ℕ) : ℕ := if A.card % 2 = 1 then (A.sort (≤)).get ((A.card
- 1) / 2) else (A.sort (≤)).get (A.card / 2 - 1) / 2 + (A.sort (≤)).get (A.card / 2) / 2 lemma median A less than B : (median A) < (median B) := begin simp [median, A, B, finset.sort, finset.get], norm num, end

Question Type: Word Problem. Educational Level: High School. k=2. Failed case. Concept(s): Integrals in Calculus 1; Summarizing quantitative data in Statistics and probability. Comments: The informal statement does not cover the situation of leap
year. The solution does not give the final answer.

Informal Statement A company produced a new model of a car and started selling it on January 1. The number of cars sold t days after January 1 is given by the function f(t) = 2t² + 3t + 1. Use integrals to find the total number of cars sold from January 1 to March 31. Then calculate the average number of cars sold per day during this period.

Informal Proof The total number of cars sold from January 1 to March 31 can be found by integrating the function f(t) from t = 0 (January 1) to t = 90 (March 31), as each day corresponds to one unit of t. Performing the integration, we get ∫ (2t² + 3t + 1) dt from 0 to 90, which equals [ (2/3)t³ + (3/2)t² + t ] from 0 to 90. Substituting t = 90 into the integrated function and subtracting the value at t = 0, we get the total number of cars sold. The average number of cars sold per day is then the total number of cars sold divided by 90 (the number of days from January 1 to March 31).

Formal Statement and Proof import data.real.basic open real def f := λ(t : ℝ), 2*t^2 + 3*t + 1 lemma f value at 90 : f 90 = 2*(90)^2 + 3*(90) + 1 := begin simp [f], end

Question Type: Theorem Proving. Educational Level: Higher Education. k=1. Successful case. Concept(s): Alternate coordinate systems (bases) in
Linear algebra. Comments: MUSTARD correctly applies the lemma. All the concepts of linear algebra are found.

Informal Statement Let V be a vector space, and let B1 = {v1, v2, ..., vn} and B2 = {w1, w2, ..., wn} be two bases for V. Prove that there exists an invertible linear map T : V → V such that T(vi) = wi for each i in {1, 2, ..., n}.

Informal Proof The map T : V → V defined by T(vi) = wi for each i is linear: indeed, for any vectors v, v′ in B1 and any scalars c, c′, we have T(cv + c′v′) = cw + c′w′ where w = T(v) and w′ = T(v′). This follows from the linearity properties of vector addition and scalar multiplication in the vector space V.

Formal Statement and Proof import linear algebra.basis variables V : Type* [add comm group V] [module ℝ V] variables v w : V variables B1 B2 : set V variables (hB1 : is basis ℝ B1) (hB2 : is basis ℝ B2) lemma exists invertible linear map : ∃ (T : V →ₗ[ℝ] V) (hT : is linear map ℝ T) (hinv : function.bijective T), ∀ (v ∈ B1), T v ∈ B2 := begin use [hB2.constr hB1, hB2
.is linear hB1, hB2.bijective hB1], intros v hv, apply hB2.mem span, rw hB1.repr self hv, exact λ , hB2.smul , end

Question Type: Word Problem. Educational Level: Higher Education. k=1. Successful case. Concept(s): Series in Integral Calculus. Comments: MUSTARD is proficient in number series and is able to perform detailed proof by integration.

Informal Statement Prove that the integral of the series ∑ (from n = 1 to infinity) of (−1)^(n+1)/n equals ln(2).

Informal Proof We start by recognizing the series ∑ (from n = 1 to infinity) of (−1)^(n+1)/n as the alternating harmonic series, which is known to converge to ln(2).

Formal Statement and Proof import data.real.basic import analysis.special functions.pow open real lemma integral of series : ∫ x in 0..1, (ln (1 + x)) = ∑′ (n : ℕ), ((-1)^(n+1) / (n + 1)) := begin have h der : ∀ x ∈ set.Icc (0 : ℝ) 1, deriv (λ x, (ln (1 + x))) x = 1 / (1 + x), { intros x hx, have h pos : 0 < 1 + x := by nlinarith,
exact deriv ln one add h pos }, have h int : interval integral (λ x, 1 / (1 + x)) 0 1 volume = ∑′ (n : ℕ), ((-1)^(n+1) / (n + 1)), { have h frac : ∀ (n : ℕ), ∫ x in 0..1, x^n = 1 / (n + 1), { intro n, calc ∫ x in 0..1, x^n = [x^(n+1) / (n+1)] | 0..1 : integral pow n ... = 1 / (n + 1) : by { rw integral interval, simp } }, rw [interval integral.integral of le, h frac], simp, linarith }, have h eq : ∫ x in 0..1, (ln (1 + x)) = ∫ x in 0..1, 1 / (1 + x), { congr, ext, exact h der x (set.mem Icc.mpr 〈by nlinarith, by nlinarith〉) }, rw [h eq, h int], end

Question Type: Theorem Proving. Educational Level: Higher Education. k=2. Failed case. Concept(s):
Second order linear equations in Differential equations; Limits and continuity in AP College Calculus BC. Comments: MUSTARD does not understand second-order linear equations and continuity.

Informal Statement Given a second order linear differential equation of the form y′′ + p(x)y′ + q(x)y = 0, where p(x) and q(x) are continuous on the interval [a, b], prove that if there exist two solutions y1 and y2 that are linearly independent, then any solution y of the differential equation can be expressed as y = c1 ∗ y1 + c2 ∗ y2, where c1 and c2 are constants.

Informal Proof The general solution to a homogeneous second order linear differential equation is given by the superposition of two linearly independent solutions.
This is a consequence of the linearity of the differential operator. Suppose we have a second order linear differential equation, and y1 and y2 are two solutions that are linearly independent. Then, any solution y of the differential equation can be expressed as a linear combination of y1 and y2. This means there exist constants c1 and c2 such that y = c1 ∗ y1 + c2 ∗ y2. The reason is that any linear combination of solutions to a linear differential equation is also a solution to the differential equation. This follows from the properties of the derivative, which allow us to distribute the derivative over addition and to factor out constants.

Formal Statement and Proof import data.real.basic variables α : Type* [field α] /-- Placeholder for the differential equation -/ def diff eq (p q y : α → α) (x : α) : Prop := y x = 0 -- This is just a placeholder. A real differential equation would have a more complex form. /-- Theorem: Given a second order linear differential equation of the form y'' + p(x)y' + q(x)y = 0, where p(x) and q(x) are continuous on the interval [a, b], if there exist two solutions y1 and y2 that are linearly independent, then any solution y of the differential equation can be expressed as y = c1*y1 + c2*y2, where c1 and c2 are constants.
-/ lemma second order linear differential eq (p q y1 y2 y : α → α) (c1 c2 : α) : (
∃ x, diff eq p q y1 x ∧ diff eq p q y2 x) → (∀ x, diff eq p q y x) → (∀ x, y x = c1 * y1 x + c2 * y2 x) := begin intros h sol h y x, -- This is a placeholder. A real proof would show that y x can be written as a linear combination -- of y1 x and y2 x using the properties of the differential equation, the assumption that y1 and y2 -- are solutions, and the linearity of the differential operator. sorry end

Question Type: Word Problem. Educational Level: Higher Education. k=2. Successful case. Concept(s): Matrix transformations in Linear algebra;
Collecting data in AP College Statistics. Comments: MUSTARD
performs correct matrix transformation.

Informal Statement Consider two matrices A = [1, 2; 3, 4] and B = [5, 6; 7, 8] which represent two different data collection methods applied to a survey of a population. We want to find the collective effect of these two data collection methods when applied simultaneously. In other words, find the matrix C which is the result of the matrix transformation of A by B.

Informal Proof Matrix multiplication is a form of transformation of one matrix by another. In this case, we want to find the matrix C that is a result of the transformation of A by B. Matrix multiplication is done by taking the dot product of the rows of the first matrix with the columns of the second matrix. Therefore, C[0, 0] = (A[0, 0] ∗ B[0, 0]) + (A[0, 1] ∗ B[1, 0]) = (1 ∗ 5) + (2 ∗ 7) = 19, C[0, 1] = (A[0, 0] ∗ B[0, 1]) + (A[0, 1] ∗ B[1, 1]) = (1 ∗ 6) + (2 ∗ 8) = 22, C[1, 0] = (A[1, 0] ∗ B[0, 0])
+ (A[1, 1] ∗ B[1, 0]) = (3 ∗ 5) + (4 ∗ 7) = 43, and C[1, 1] = (A[1, 0] ∗ B[0, 1]) + (A[1, 1] ∗ B[1, 1]) = (3 ∗ 6) + (4 ∗ 8) = 50. So, the matrix C = [19, 22; 43, 50].

Formal Statement and Proof import data.matrix.basic def A : matrix (fin 2) (fin 2) ℕ := ![![1, 2], ![3, 4]] def B : matrix (fin 2) (fin 2) ℕ := ![![5, 6], ![7, 8]] def C : matrix (fin 2) (fin 2) ℕ := A.mul B #eval C -- Output: ![![19, 22], ![43, 50]]

Question Type: Theorem Proving. Educational Level: Higher Education. k=2. Concept(s):
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5dbeffee-f4e4-4d84-a0f5-134b654db9b8
# Problem: ## Informal Statement Given A Quadrilateral Abcd Where Ab = Bc = Cd = Da = 5 Units, Prove That This Quadrilateral Is A Square. Informal Proof The Given Quadrilateral Abcd Has All Its Sides Equal. This Is A Property Of A Square. Hence, Abcd Is A Square. Educational Level: Higher Education. k=2. Concept(s): Green's, Stokes', and the divergence theorems in Multivariable calculus; Differential equations in Integral Calculus. Comments: MUSTARD
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
02e8e3d5-a034-40a5-a8e6-69bce7019868
# Problem: ## Informal Statement Given A Quadrilateral Abcd Where Ab = Bc = Cd = Da = 5 Units, Prove That This Quadrilateral Is A Square. Informal Proof The Given Quadrilateral Abcd Has All Its Sides Equal. This Is A Property Of A Square. Hence, Abcd Is A Square. MUSTARD does not finish the formal proof with sorry.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
50110965-4d80-4cfa-90ce-c6c58a85eb46
# Problem: ## Informal Statement Given A Quadrilateral Abcd Where Ab = Bc = Cd = Da = 5 Units, Prove That This Quadrilateral Is A Square. Informal Proof The Given Quadrilateral Abcd Has All Its Sides Equal. This Is A Property Of A Square. Hence, Abcd Is A Square. Informal Statement Let F be a continuously differentiable vector field on R3. Let S be the boundary of the unit ball B in R3, oriented by the outward normal. Let n denote the outward unit normal on S. Prove that ∫_S div F dS = ∫_S F · n dS, where "div" denotes the divergence of F, "·" denotes the dot product, and "∫", "∮" denote the surface integral and line integral respectively.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a26a22c5-ad0a-4622-a2c0-7589d7e41d6d
# Problem: ## Informal Proof This Is An Application Of The Divergence Theorem, Also Known As Gauss's Theorem. The divergence theorem states that the surface integral of a vector field over a surface S is equal to the volume integral of the divergence of the field over the volume V enclosed by S. In mathematical terms, ∫_S F · n dS = ∫_V div F dV. In our case, the volume V is the unit ball B in R3, and the surface S is the boundary of B. The divergence theorem thus gives us ∫_S F · n dS = ∫_B div F dV. However, since B is the unit ball, its volume is 1, so the volume integral over B is just the average value of div F over B. Therefore, ∫_B div F dV = ∫_S div F dS. Putting everything together, we obtain ∫_S F · n dS = ∫_S div F dS, as required. Formal Statement and Proof import analysis.calculus.fderiv import measure_theory.integration import analysis.normed_space.inner_product import data.real.basic import topology.instances.real import analysis.special_functions.trigonometric open locale classical
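The divergence theorem itself can be sanity-checked numerically. The sketch below (illustrative Python, not part of the paper's pipeline) takes the hypothetical field F(x, y, z) = (x, y², z³), whose flux through the unit sphere equals ∫_B div F dV = 32π/15 in closed form, and compares that value against a Monte Carlo estimate of the volume integral over the unit ball:

```python
import math
import random

# Divergence of F(x, y, z) = (x, y^2, z^3) is 1 + 2y + 3z^2.
def div_F(x, y, z):
    return 1 + 2 * y + 3 * z * z

random.seed(0)
n = 200_000
acc = 0.0
for _ in range(n):
    x = random.uniform(-1, 1)
    y = random.uniform(-1, 1)
    z = random.uniform(-1, 1)
    if x * x + y * y + z * z <= 1:  # keep samples inside the unit ball B
        acc += div_F(x, y, z)
# Estimate: (mean of div_F over the cube, restricted to B) * volume of cube
volume_integral = acc / n * 8

exact = 32 * math.pi / 15  # closed-form value of the flux integral
print(volume_integral, exact)
```

With a fixed seed and 200k samples the Monte Carlo estimate agrees with the closed form to well under a percent, illustrating the equality the informal proof invokes.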
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6b804ba2-4a32-47c2-af7d-aa629441ecce
# Problem: ## Noncomputable Theory /- Let's define R3 as Euclidean space -/ def R3 := euclidean_space R (fin 3) /- Define the ball and boundary -/ def B := metric.ball (0:R3) 1 def S := metric.sphere (0:R3) 1 /- Assume F is a continuously differentiable vector field on R3 -/ variables (F : R3 → R3) (hF : ∀ x, differentiable_at R F x) /- Define the divergence of F -/ def divF := λ x, (fderiv R F x).trace /- Define the outward unit normal on S -/ def n := λ x:R3, x /- Now we can state the theorem -/ def div_eq_surface_integral : ∫ x in B, divF F x = ∫ x in S, inner (F x) (n x) := begin -- This is where the main proof would go, using the divergence theorem and other facts about integration. sorry
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f01eff2d-043a-4907-9cae-e86a9a4d20e3
# Problem: ## End Question Type: Theorem Proving. Educational Level: Higher Education. Concept(s): Matrix transformations in Linear algebra. Comments: The generated informal statement has factual errors. Informal Statement Let's consider a function f : R → R that is differentiable at a point x0. Let's also consider a 2 × 2 matrix A. We define a new function g(x) = A ∗ f(x). Prove that g is also differentiable at x0, and find the derivative of g at x0. Informal Proof Since f is differentiable at x0, the limit of the difference quotient exists at this point. Denote this limit as f ′(x0). We know that matrix multiplication is a linear operation. So we can move the limit inside the matrix multiplication to find the derivative of g at x0. So, g′(x0) = A ∗ f ′(x0). Formal Statement and Proof import data.matrix.basic import analysis.calculus.deriv variables f : R → R A : matrix (fin 2) (fin 2) R x0 : R variable (h : differentiable_at R f x0) include h lemma matrix_times_differentiable_function : differentiable_at R (λ x, A · (λ i, f x) i) x0 := begin apply matrix.has_fderiv_at_diff_at, exact h.has_deriv_at, end lemma derivative_of_matrix_times_function : deriv (λ x, A · (λ i, f x) i) x0 = A · (λ i, deriv f x0) i := begin apply has_deriv_at.deriv, ex
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ceb38589-97f9-46f3-adb5-1f242aedb088
# Problem: ## End include h lemma matrix_times_differentiable_function : differentiable_at R (λ x, A · (λ i, f x) i) x0 := begin apply matrix.has_fderiv_at_diff_at, exact h.has_deriv_at, end lemma derivative_of_matrix_times_function : deriv (λ x, A · (λ i, f x) i) x0 = A · (λ i, deriv f x0) i := begin apply has_deriv_at.deriv, exact matrix_times_differentiable_function h, end
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
60b81a8b-85b5-4336-b416-7e15f7c51a95
# Problem: ## Zero Shot Prompt Template For Math Word Problem Inference You are an expert in math. Answer the following math word problem. Question: <question> Answer: Let's think step by step.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
84cb9651-b40e-4320-aee8-23fa3bb2c5d2
# Problem: ## Few Shot Prompt Template For Math Word Problem Inference You are an expert in math. Answer the following math word problem. Question: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? Answer: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6. Question: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? Answer: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5. Question: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? Answer: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39. Question: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? Answer: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8. Question: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? Answer: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. Question: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? Answer: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5ccd7cc6-2ca3-42b9-b52e-1440bdff0d3d
# Problem: ## Few Shot Prompt Template For Math Word Problem Inference he have now? Answer: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9. Question: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? Answer: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29. Question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? Answer: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33. Question: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? Answer: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8. Question: <question> Answer:

| MODEL | Zero (G) | Few (G) | Zero (M) | Few (M) |
|---|---|---|---|---|
| Baselines | | | | |
| Llama 2-7B | 7.2 | 12.8 | 2
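Few-shot templates like the one above are typically assembled programmatically from a list of exemplars plus the held-out question. A minimal sketch (illustrative; the `build_prompt` helper is invented here, not from the paper, though the header and exemplar follow the template shown):

```python
# Build a few-shot chain-of-thought prompt from (question, answer) exemplars.
HEADER = "You are an expert in math. Answer the following math word problem."

def build_prompt(exemplars, question):
    parts = [HEADER]
    for q, a in exemplars:
        parts.append(f"Question: {q}\nAnswer: {a}")
    # The final slot is left open for the model to complete.
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

exemplars = [
    ("If there are 3 cars in the parking lot and 2 more cars arrive, "
     "how many cars are in the parking lot?",
     "There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. "
     "The answer is 5."),
]
prompt = build_prompt(
    exemplars,
    "Olivia has $23. She bought five bagels for $3 each. "
    "How much money does she have left?",
)
print(prompt)
```

The prompt ends with an open `Answer:` so that generation continues with the chain-of-thought solution.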
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2761957e-0a1b-42b1-9222-bda6a4e9fa56
# Problem: ## Few Shot Prompt Template For Math Word Problem Inference

| MODEL | Zero (G) | Few (G) | Zero (M) | Few (M) |
|---|---|---|---|---|
| Baselines | | | | |
| Llama 2-7B | 7.2 | 12.8 | 2.0 | 2.6 |
| Llama 2-70B | 31.7 | 54.1 | 8.8 | 13.4 |
| Fine-tuning | | | | |
| Llama 2-7B > tt | 9.6 | 16.0 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d630745f-0ad8-4409-a5ad-5934ae86e3f2
# Problem: ## Few Shot Prompt Template For Math Word Problem Inference

| Llama 2-7B > tt | 9.6 | 16.0 | 3.2 | 3.8 |
| Llama 2-7B > in | 9.1 | 14.9 | 2.4 | 3.2 |
| Llama 2-7B |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2fa8921f-aee0-4d1f-8fa8-39de6a9b0b4e
# Problem: ## Few Shot Prompt Template For Math Word Problem Inference

| Llama 2-7B > in | 9.1 | 14.9 | 2.4 | 3.2 |
| Llama 2-7B > ra | 9.5 | 15.4 | 3.0 | 3.6 |
| Llama 2-7B > va |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
36527afc-5b90-4a34-92ba-bfea5b10a67f
# Problem: ## Few Shot Prompt Template For Math Word Problem Inference

| Llama 2-7B > va | 10.3 (+8.42%) | 16.9 (+9.74%) | 3.2 (+6.67%) | 4.2 (+16.67%) |
| Llama 2-70B > tt | 36.6 | 55.8 | 10.0 | 14.4 |
| Llama 2-70B |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d80b734f-6451-4b55-bdf6-eef99f1e72ed
# Problem: ## Few Shot Prompt Template For Math Word Problem Inference

| Llama 2-70B > tt | 36.6 | 55.8 | 10.0 | 14.4 |
| Llama 2-70B > in | 33.4 | 53.7 | 9.2 | 13.6 |
| Llama 2-70B > |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
06b44ef6-08ba-4db5-9dfe-4685be899530
# Problem: ## Few Shot Prompt Template For Math Word Problem Inference

| Llama 2-70B > ra | 36.1 | 55.4 | 9.6 | 14.2 |
| Llama 2-70B > va | 39.5 (+9.42%) | 59.1 (+6.68%) |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
964938d1-8ba3-4890-95a2-4930e8eefec7
# Problem: ## Few Shot Prompt Template For Math Word Problem Inference

| Llama 2-70B > va | 39.5 (+9.42%) | 59.1 (+6.68%) | 10.4 (+8.33%) | 15.0 (+5.30%) |
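Most of the parenthesized gains in the table are consistent with relative improvement over the corresponding `> ra` row (e.g., 10.3 vs. 9.5 gives +8.42%); this is an inference from the numbers, not a stated definition. A quick check (illustrative Python):

```python
# Relative improvement of score `new` over baseline `old`, in percent.
def rel_gain(new, old):
    return round((new - old) / old * 100, 2)

print(rel_gain(10.3, 9.5))   # matches the reported +8.42%
print(rel_gain(16.9, 15.4))  # matches the reported +9.74%
print(rel_gain(39.5, 36.1))  # matches the reported +9.42%
```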
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e5c70218-05e9-40f3-8f6d-595a5b8a654f
# Problem: ## Mustardsauce V.S. Minif2F (Cosine Similarity = 0.6439) Mustardsauce Case Informal Statement: Alex has 5 ten-dollar bills and 3 one-dollar bills. How much money does Alex have in total? To find out how much money Alex has in total, we need to multiply the number of each type of bill by its value. So, Alex has 5 ten-dollar bills, which equals 5 * 10 = 50 dollars. He also has 3 one-dollar bills, which equals 3 * 1 = 3 dollars. Adding these two amounts together gives 50 + 3 = 53 dollars. Therefore, Alex has 53 dollars in total. def calculate_money (tens : N) (ones : N) : N := tens * 10 + ones * 1 example : calculate_money 5 3 = 53 := begin rw calculate_money, refl, end
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e1ca447a-061c-4beb-a022-12bd4aadeac6
# Problem: ## Minif2F Case theorem algebra_sqineq_unitcircatbpamblt1 (a b : R) (h0 : a^2 + b^2 = 1) : a * b + (a - b) ≤ 1 := begin nlinarith [sq_nonneg (a - b)], end
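Read with the garbled symbols as ℝ and ≤, the theorem claims a·b + (a − b) ≤ 1 whenever a² + b² = 1. A numeric sweep over the unit circle supports the claim (illustrative Python; the `nlinarith` proof above is the actual formal certificate):

```python
import math

# Sweep points (a, b) = (cos t, sin t) on the unit circle and track the
# maximum of a*b + (a - b); equality is attained at (a, b) = (1, 0).
worst = -float("inf")
for i in range(100_000):
    t = 2 * math.pi * i / 100_000
    a, b = math.cos(t), math.sin(t)
    worst = max(worst, a * b + (a - b))
print(worst)  # stays at or below 1
```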
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9025d8f5-28df-4307-b6fb-1cc12c829771
# Problem: ## Mustardsauce V.S. Mathlib (Cosine Similarity = -0.0361) Mustardsauce Case Informal Statement: A cube has a side length of 5 cm. What is the volume of the cube? The volume of a cube is calculated by raising the side length to the power of 3. So in this case, the volume is 5 cm * 5 cm * 5 cm = 125 cubic centimeters. def cube_volume (side_length : N) : N := side_length * side_length * side_length #eval cube_volume 5 -- returns 125
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
85592f10-39b4-4e5b-a645-5c4961ec4373
# Problem: ## Mathlib Case GOAL α : Type u, β : Type v, γ : Type w, f : α → β → γ, l1 : list α ⊢ ∀ (l2 : list β), (list.map2 f l1 l2).length = linear_order.min l1.length l2.length PROOFSTEP induction l1; intro l2; cases l2; simp [*, add_one, min_succ_succ, nat.zero_min, nat.min_zero] MUSTARDSAUCE v.s. GSM8K (cosine similarity = 0.5975)
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e98aa8e3-fdab-41df-a4b5-48f38ab415bf
# Problem: ## Mustardsauce Case Informal Statement: Given two vectors u = (u1,u2) and v = (v1,v2) in R2, prove that the operation of vector addition, defined as u + v = (u1+v1, u2+v2), is commutative. That is, prove that for all u and v in R2, u + v = v + u. We can prove this by direct computation. Given the vectors u = (u1, u2) and v = (v1, v2), we have: u + v = (u1 + v1, u2 + v2) and v + u = (v1 + u1, v2 + u2). Since addition is commutative in R (the set of all real numbers), we have u1 + v1 = v1 + u1 and u2 + v2 = v2 + u2. Thus, we conclude that (u1 + v1, u2 + v2) = (v1 + u1, v2 + u2), which means that u + v = v + u. import data.real.basic variables (u1 u2 v1 v2 : R) -- defining vectors as pair of real numbers def vector := R × R -- defining vector addition def vadd (u v : vector) : vector := (u.1 + v.1, u.2 + v.2) -- defining vectors u and v def u : vector := (u1, u2) def v : vector := (v1, v2) -- commutativity of vector addition theorem vadd_comm : vadd (u u1 u2) (v v1 v2) = vadd (v v1 v2) (u u1 u2) := begin unfold vadd, unfold u, unfold v, rw add_comm u1 v1, rw add_comm u2 v2, end
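The componentwise argument can also be exercised mechanically; a throwaway sketch in Python (the Lean development above is the formal version — this only spot-checks concrete values):

```python
# Vector addition on R^2 represented as pairs; commutativity reduces to
# commutativity of addition on each component.
def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

u, v = (1.5, -2.0), (3.25, 4.0)
print(vadd(u, v), vadd(v, u))  # the two results coincide
```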
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2090762b-df50-480d-9288-5f4f0b39b131
# Problem: ## Gsm8K Case Question: The local firefighters are doing a "fill the boot" fundraiser. Their goal is to raise $6300. After the first 3 hours, they have raised $2100. For how many hours do they have to fundraise in total to reach their goal, assuming an equal amount raised in every hour? Answer: The fireman raise 2100 / 3 = $<<2100/3=700>>700 per hour. They have to fundraise for 6300 / 700 = <<6300/700=9>>9 hours. #### 9 MUSTARDSAUCE v.s. MATH (cosine similarity = 0.6064)
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
bd5a4f07-ce14-4d47-bb32-01150350fe4c
# Problem: ## Mustardsauce Case Informal Statement: Given two vectors u = (u1,u2) and v = (v1,v2) in R2, prove that the operation of vector addition, defined as u + v = (u1+v1, u2+v2), is commutative. That is, prove that for all u and v in R2, u + v = v + u. We can prove this by direct computation. Given the vectors u = (u1, u2) and v = (v1, v2), we have: u + v = (u1 + v1, u2 + v2) and v + u = (v1 + u1, v2 + u2). Since addition is commutative in R (the set of all real numbers), we have u1 + v1 = v1 + u1 and u2 + v2 = v2 + u2. Thus, we conclude that (u1 + v1, u2 + v2) = (v1 + u1, v2 + u2), which means that u + v = v + u. import data.real.basic variables (u1 u2 v1 v2 : R) -- defining vectors as pair of real numbers def vector := R × R -- defining vector addition def vadd (u v : vector) : vector := (u.1 + v.1, u.2 + v.2) -- defining vectors u and v def u : vector := (u1, u2) def v : vector := (v1, v2) -- commutativity of vector addition theorem vadd_comm : vadd (u u1 u2) (v v1 v2) = vadd (v v1 v2) (u u1 u2) := begin unfold vadd, unfold u, unfold v, rw add_comm u1 v1, rw add_comm u2 v2, end
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d51149ff-64b6-4250-adef-1b7d4403df36
# Problem: ## Math Case Problem: If a snack-size tin of peaches has 40 calories and is 2% of a person's daily caloric requirement, how many calories fulfill a person's daily caloric requirement? Solution: If 40 calories is equal to 2% = 2/100 = 1/50 of a person's daily requirement, then a person's daily caloric requirement is: 40 · 50 = 2000. Answer: 2000
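Reading the garbled fractions as 2/100 = 1/50, the arithmetic checks out directly; an illustrative one-liner:

```python
# 40 calories is 2% of the daily requirement, so the requirement is
# 40 / (2 / 100) = 40 * 50 calories.
daily = 40 / (2 / 100)
print(daily)  # 2000.0
```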
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d6c141b0-52cc-4e26-bf8a-bd2a79a15705
# Generative Cross-Modal Retrieval: Memorizing Images In Multimodal Language Models For Retrieval And Beyond Yongqi Li1, Wenjie Wang2, Leigang Qu2, Liqiang Nie3, Wenjie Li1**, Tat-Seng Chua**2 1The Hong Kong Polytechnic University 2National University of Singapore 3Harbin Institute of Technology (Shenzhen) {liyongqi0,wenjiewang96,leigangqu,nieliqiang}@gmail.com cswjli@comp.polyu.edu.hk dcscts@nus.edu.sg
{ "creation_datetime": "2024-03-04", "file_name": "2402.10805v1.md", "file_path": "paper_data/2402.10805v1.md", "file_size": 41863, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
47d089a7-c74c-474c-900b-c62a5bb168f4
# Generative Cross-Modal Retrieval: Memorizing Images In Multimodal Language Models For Retrieval And Beyond ## Abstract The recent advancements in generative language models have demonstrated their ability to memorize knowledge from documents and recall knowledge to respond to user queries effectively. Building upon this capability, we propose to enable multimodal large language models (MLLMs) to memorize and recall images within their parameters. Given a user query for visual content, the MLLM is anticipated to "recall" the relevant image from its parameters as the response. Achieving this target presents notable challenges, including inbuilt visual memory and visual recall schemes within MLLMs. To address these challenges, we introduce a generative cross-modal retrieval framework, which assigns unique identifier strings to represent images and involves two training steps: learning to memorize and learning to retrieve. The first step focuses on training the MLLM to memorize the association between images and their respective identifiers. The latter step teaches the MLLM to generate the corresponding identifier of the target image, given the textual query input. By memorizing images in MLLMs, we introduce a new paradigm to cross-modal retrieval, distinct from previous discriminative approaches. The experiments demonstrate that the generative paradigm performs effectively and efficiently even with large-scale image candidate sets.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10805v1.md", "file_path": "paper_data/2402.10805v1.md", "file_size": 41863, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
cdaea3f3-721e-4341-9e74-8e6711b11b9a
# Generative Cross-Modal Retrieval: Memorizing Images In Multimodal Language Models For Retrieval And Beyond ## 1 Introduction Recently, we have witnessed the explosive development of generative large language models (LLMs), such as the GPT series (Radford et al., 2019; Brown et al., 2020) and LLaMA (Touvron et al., 2023a,b). Having undergone extensive pretraining on document corpora and instruction tuning, these language models have demonstrated an impressive ability to memorize a lot of knowledge in their parameters and effectively recall it to answer users' instructions and queries. As shown in Figure 1, GPT-4 (https://openai.com/gpt-4) could directly respond to the user's question, "Who is Sheldon Cooper?", without any external document or database. Building upon the advancements of LLMs, multimodal LLMs (MLLMs) (Alayrac et al., 2022; Li et al., 2023a; Liu et al., 2023; Zhu et al., 2023; Huang et al., 2023) have been developed to expand these capabilities beyond text and allow users to express their needs using visual input. Despite the impressive capabilities of LLMs and MLLMs, their responses are limited to textual outputs. For instance, a user might ask, "What does Sheldon Cooper look like?" as shown in Figure 1. While the MLLM tries to describe the person's appearance, it is often said that "an image is worth a thousand words." It would greatly enhance the response capabilities of MLLMs if they could give visual outputs, like a photograph in this case. A straightforward solution is to enhance MLLMs with external image synthesis tools, like diffusion models (Dhariwal and Nichol, 2021; Ho et al., 2020) and Generative Adversarial Networks (Goodfellow et al., 2020), for visual output capabilities. However, a significant challenge with these modules is their propensity to produce unrealistic or hallucinatory images, which cannot accurately describe real-world images, such as a photograph of "Sheldon Cooper".
The integration of an image retrieval module (Radford et al., 2021) seems a more viable solution. Nonetheless, such a combination often encounters a transition gap between two independent modules (Lewis et al., 2020). Considering the massive benefits of LLMs in memorizing textual knowledge, a bold and innovative idea emerges: Is it possible to equip MLLMs with the ability to memorize visual information within their parameters for retrieval and beyond? In this light, we
{ "creation_datetime": "2024-03-04", "file_name": "2402.10805v1.md", "file_path": "paper_data/2402.10805v1.md", "file_size": 41863, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d38a5064-d9b1-44e3-ba36-570db4486bb4
# Generative Cross-Modal Retrieval: Memorizing Images In Multimodal Language Models For Retrieval And Beyond ## 1 Introduction a significant challenge with these modules is their propensity to produce unrealistic or hallucinatory images, which cannot accurately describe real-world images, such as a photograph of "Sheldon Cooper". The integration of an image retrieval module (Radford et al., 2021) seems a more viable solution. Nonetheless, such a combination often encounters a transition gap between two independent modules (Lewis et al., 2020). Considering the massive benefits of LLMs in memorizing textual knowledge, a bold and innovative idea emerges: Is it possible to equip MLLMs with the ability to memorize visual information within their parameters for retrieval and beyond? In this light, we formulate a generative cross-modal retrieval task: given a user query for visual content, MLLMs are expected to recall desired images from their parameters directly as the response. Accomplishing this task poses a significant challenge, necessitating the presence of two essential abilities of MLLMs: 1) Visual memory. As the prerequisite requirement, the MLLM model must possess the capability to memorize visual information within its parameters. This goes beyond simply encoding images into dense vectors within a vector database. It necessitates a distinct, differentiable, and integrated visual memory scheme within MLLMs' parameters. 2) Visual recall. Given a textual query, the MLLM should be able to recall the relevant visual information from the complicated visual memory bank. Above this, for user comprehension, the activated visual information must be grounded to the complete and original images rather than mere patches or fragmented visuals. In this work, we propose a novel GeneRAtive Cross-modal rEtrieval framework, GRACE, to overcome the above issues. GRACE assigns images unique identifiers, where each identifier is a distinct string representing an image. 
Based on the identifiers, GRACE comprises two training steps, as illustrated in Figure 2. 1) Learning to memorize. Given an image, the MLLM is trained to generate the corresponding identifier string via the standard text generation loss. The goal of this phase is for the MLLM to effectively learn and memorize the associations between the visual content of images and their respective identifiers. 2) Learning to retrieve. The MLLM is trained to generate the identifier string of the relevant image while given a textual query. In this way, the MLLM learns to associate user queries with visual memory. After the two training steps above, GRACE enables generative cross-modal retrieval: given a textual query, the MLLM
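The two training steps can be pictured as two supervised mappings onto the same identifier space. A toy sketch (illustrative Python, not GRACE's actual training code; the image names, captions-standing-in-for-pixels, and identifier format are all invented for this example):

```python
# Each image gets a unique identifier string. Training pairs supervise
# generating that identifier from either the image itself ("learning to
# memorize") or a textual query ("learning to retrieve").
images = {"img_a": "photo of a beach", "img_b": "photo of a cat"}
identifiers = {name: f"id-{i:03d}" for i, name in enumerate(sorted(images))}

# Step 1: (image content, identifier) pairs.
memorize_pairs = [(images[n], identifiers[n]) for n in sorted(images)]
# Step 2: (textual query, identifier of the relevant image) pairs.
retrieve_pairs = [
    ("a sandy beach at noon", identifiers["img_a"]),
    ("a sleeping cat", identifiers["img_b"]),
]
print(identifiers)
```

At inference time, only the query-to-identifier direction is used: the generated identifier string is looked up to return the original image.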
{ "creation_datetime": "2024-03-04", "file_name": "2402.10805v1.md", "file_path": "paper_data/2402.10805v1.md", "file_size": 41863, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3bba9531-5b21-45c8-a587-be3602ad19e9
# Generative Cross-Modal Retrieval: Memorizing Images In Multimodal Language Models For Retrieval And Beyond ## 1 Introduction to memorize. Given an image, the MLLM is trained to generate the corresponding identifier string via the standard text generation loss. The goal of this phase is for the MLLM to effectively learn and memorize the associations between the visual content of images and their respective identifiers. 2) Learning to retrieve. The MLLM is trained to generate the identifier string of the relevant image while given a textual query. In this way, the MLLM learns to associate user queries with visual memory. After the two training steps above, GRACE enables generative cross-modal retrieval: given a textual query, the MLLM generates an identifier string corresponding to a real image. We delve into GRACE from various perspectives, including different identifier types, effectiveness, and efficiency of the generative paradigm. We evaluate GRACE on text-image matching datasets to verify the feasibility of generative cross-modal retrieval. Without any image's visual information during inference, GRACE performs comparably to the advanced two-tower approaches (e.g., CLIP (Radford et al., 2021)) and demonstrates higher efficiency with large-scale image sizes. It is acknowledged that, as a new retrieval paradigm, GRACE still lags behind one-tower approaches. One-tower approaches are only applicable to the ranking stage due to their low efficiency, while GRACE and CLIP are specifically designed for the retrieval stage. Through comprehensive analysis, we hope to thoroughly understand its capabilities and limitations. We believe exploring generative cross-modal retrieval holds great significance. - Benefiting from inbuilt visual memory within MLLMs, GRACE introduces a new paradigm to cross-modal retrieval.
GRACE transforms the original matching problem into a generation problem, eliminating the need for negative samples during training and retrieval index during inference. No matter the size of the image set, the retrieval efficiency remains constant. This new cross-modal retrieval paradigm leaves much room for investigation. - Inbuilt visual memory serves for retrieval, yet its utility extends beyond mere retrieval. In Section 4.5, we demonstrate that the MLLM could describe the memorized image and even answer questions about the memorized images, just like humans do. This opens up the possibility of injecting personalized visual experiences of humans into MLLMs for them to memorize and understand an individual's journey, and accomplish more visual tasks.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10805v1.md", "file_path": "paper_data/2402.10805v1.md", "file_size": 41863, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
## 2 Related Work

### 2.1 Cross-Modal Retrieval

Current cross-modal retrieval (text-image matching) approaches can be categorized into the one-tower framework and the two-tower framework based on how modality interaction is handled. The one-tower framework (Chen et al., 2020; Diao et al., 2021; Lee et al., 2018; Qu et al., 2021) embraces fine-grained cross-modal interactions to achieve matching between fragments (e.g., objects and words). In the two-tower framework (Chen et al., 2021; Faghri et al., 2017; Zheng et al., 2020; Qu et al., 2020), images and texts are independently mapped into a joint feature space in which semantic similarities are calculated via the cosine function or Euclidean distance. Both frameworks formulate cross-modal retrieval as a discriminative problem, relying on a discriminative loss and negative samples to learn an embedding space. In this work, we explore a new generative paradigm for cross-modal retrieval.

### 2.2 Generative Retrieval

Generative retrieval is an emerging retrieval paradigm in text retrieval that generates identifier strings of passages as the retrieval target. Instead of generating entire passages, this approach uses identifiers to reduce the amount of useless information and make it easier for the model to memorize and learn (Li et al., 2023d). Different types of identifiers have been explored in various search scenarios, including passage titles (Web URLs), numeric IDs, and substrings of passages, as shown in previous studies (De Cao et al., 2020; Tay et al., 2022; Bevilacqua et al., 2022; Li et al., 2023c; Zhang et al., 2023; Li et al., 2023b). Generative retrieval has gained a lot of attention in text retrieval, as it can take advantage of powerful generative language models.
However, how to facilitate cross-modal retrieval in a generative way is still an untapped problem.
### 2.3 Multimodal Language Model

We have witnessed the explosive development of generative language models, such as GPT (Radford et al., 2019; Brown et al., 2020) and LLaMA (Touvron et al., 2023a), which demonstrate remarkable capabilities in instruction following and in-context learning. Building upon the advancements of LLMs, MLLMs (Alayrac et al., 2022; Li et al., 2023a; Liu et al., 2023; Zhu et al., 2023; Huang et al., 2023) have been developed to enable LLMs to process images as input. Despite the success of MLLMs in various vision-language tasks, they currently lack the ability to incorporate cross-modal retrieval into their applications. In this work, we propose a generative cross-modal retrieval framework that empowers MLLMs to retrieve relevant images from their parameters given textual queries.
## 3 Method

### 3.1 Preliminary

Task definition. Generative cross-modal retrieval imposes a new requirement on cross-modal retrieval, i.e., removing visual input during inference, but can still be evaluated with the original cross-modal tasks. Text-to-image retrieval aims to retrieve relevant images from a database DI when given a textual query q.

Multimodal language model. As our method builds on multimodal language models, it is essential to give the relevant background. Multimodal language models can be regarded as generative language models that incorporate image inputs, such as GPT-4V, BLIP (Li et al., 2023a), Flamingo (Alayrac et al., 2022), and Kosmos (Huang et al., 2023). Considering factors including convenience and model size, we chose Flamingo as the backbone for our method and adopted the OpenFlamingo implementation (Awadalla et al., 2023). Flamingo consists of three main components: a generative language model, a visual encoder, and cross-attention layers. The visual encoder is responsible for extracting patch features from the input images. The generative language model receives text input that includes a special token, "<image>", which indicates the presence of an image. Through the cross-attention layers, the "<image>" token can attend to the patch features extracted by the visual encoder. This allows Flamingo to predict the next text token based on all previous text tokens and the most recent image. For more detailed information, please refer to the original Flamingo paper.
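The next-token conditioning described above can be written as a standard autoregressive factorization (notation ours, not from the paper): with $y_t$ the $t$-th text token, $y_{<t}$ the preceding tokens, and $i$ the most recent image,

```latex
p(y \mid i) = \prod_{t=1}^{T} p\left(y_t \mid y_{<t},\, i\right)
```

so identifier generation in GRACE reduces to ordinary next-token prediction conditioned on the image (or, at retrieval time, on the textual query alone).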
### 3.2 Overview

In this work, we present GRACE, a novel generative cross-modal retrieval framework, as illustrated in Figure 2. As previously discussed, addressing the challenges of visual memory and visual recall is essential for generative cross-modal retrieval. Toward this objective, GRACE assigns **unique** identifiers to the images in the dataset DI. This strategy allows the model to learn mappings from images to their respective identifiers, facilitating visual memory. Moreover, the model can generate identifiers as retrieval results rather than generating real images. Representing images as identifiers underpins our training scheme, which is divided into two core steps: "learning to memorize" and "learning to retrieve". These two steps enable the model to effectively memorize images in its parameters and subsequently learn to recall them in response to textual queries.
### 3.3 Image Identifiers

Image identifiers are crucial for the whole framework, and we explore the following types of identifiers:

String identifier. We randomly shuffle the images in DI and assign them digital numbers ranging from 1 to |DI|. Note that these digital numbers are represented as strings in MLLMs and may be tokenized into multiple tokens, as determined by the tokenizer. For instance, an image may be assigned the identifier "13768" and tokenized into two tokens: "13" and "768".

Numeric identifier. Similar to the string identifier, the numeric identifier ranges from 1 to |DI|. However, we insert spaces into the numeric identifier, so that it is tokenized into individual digits. For example, an image with the identifier "1 3 7 6 8" will be tokenized into the sequence of tokens "1", "3", "7", "6", and "8". It is worth noting that the numeric identifier uses only ten tokens from the vocabulary to represent images, but the sequence length is typically longer than that of the string identifier.

Semantic identifier. Since identifiers are used to represent images, image captions that describe the content of images can serve as identifiers. These captions are naturally token sequences that multimodal language models can learn. Some images in DI belong to the test set, and their captions should not be used. To avoid data leakage, we train an image captioning model on the training set and generate captions for the test-set images as their identifiers.

Structured identifier. We assign structured identifiers to images using an unsupervised clustering approach. We utilize the image encoder of CLIP to obtain image embeddings. We then apply the k-means algorithm (Ahmed et al., 2020) to cluster these embeddings, grouping all images into k clusters.
Each image is then assigned an identifier based on its cluster ID. For clusters that contain more than a certain number of images (denoted as c), we recursively apply the algorithm (Tay et al., 2022). In this process, the identifier of the next level is appended to the existing identifier, forming a hierarchical structure. We represent each cluster using special tokens, such as "C_1_3", which indicates the third cluster in the first level. These special tokens are added to the token vocabulary
of the multimodal language model. Similar images tend to have similar structured identifiers, meaning they share similar paths in the cluster tree.

Atomic identifier. We assign each image a dedicated token as its identifier to identify it uniquely. We expand the token vocabulary by introducing new tokens to ensure compatibility with the existing ones. Each image is then assigned a special token, such as "I_13768", which is a complete token in the vocabulary and will not be further tokenized into sub-tokens. This approach avoids any conflicts with the original tokens while providing a distinct identifier for each image.

We present the various types of identifiers for the same image in Figure 3, highlighting their distinct characteristics. It is evident that different identifier types possess different attributes. String, numeric, and atomic identifiers do not provide any prior knowledge about the image content, whereas semantic and structured identifiers do. Furthermore, the use of structured and atomic identifiers necessitates the inclusion of new tokens in the vocabulary, whereas the other identifier types require no such modifications.
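The recursive clustering behind structured identifiers can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the tiny `kmeans` stands in for CLIP embeddings plus a production k-means, and the `C_<level>_<cluster>` / `LEAF_<pos>` token names are our own.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means; returns a cluster label in [0, k) for each row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def structured_identifiers(X, ids, k=3, c=5, prefix=(), level=1):
    """Recursively cluster embeddings X; each image id gets a unique path of
    cluster tokens, e.g. ("C_1_2", "C_2_0", "LEAF_3"). Clusters larger than c
    are clustered again, appending the next level's token to the identifier."""
    if len(ids) <= c or len(ids) < k:
        # Small enough: enumerate positions within the final cluster.
        return {img: prefix + (f"LEAF_{pos}",) for pos, img in enumerate(ids)}
    labels = kmeans(X, k, seed=level)
    if len(set(labels.tolist())) == 1:  # degenerate split; stop recursing
        return {img: prefix + (f"LEAF_{pos}",) for pos, img in enumerate(ids)}
    out = {}
    for j in range(k):
        mask = labels == j
        sub_ids = [img for img, m in zip(ids, mask) if m]
        if not sub_ids:
            continue
        out.update(structured_identifiers(
            X[mask], sub_ids, k, c, prefix + (f"C_{level}_{j}",), level + 1))
    return out
```

Sibling clusters at the same level receive distinct tokens, so two images share an identifier prefix exactly when they fall in the same subtree, giving the property that similar images have similar identifier paths.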
### 3.4 Learning To Memorize

We have represented the images in the dataset DI using unique identifiers, i.e., as sequences of tokens. We then train a multimodal language model, denoted as **MLLM**, to encapsulate these images within its parameters. Specifically, for an image i ∈ DI, we train the model to associate this image with its corresponding identifier, denoted as I. This process is formulated as follows:

$$\mathcal{I} = \mathbf{MLLM}(i;\ \text{inst-m}), \qquad (1)$$

where inst-m is the textual instruction "Predict the identifier for the <image>". Here, "<image>" is a placeholder token in Flamingo, designed to focus on the visual features of the input. This learning-to-memorize step allows the model to learn the mappings from visual inputs to their corresponding identifiers, effectively encoding image-level visual memories within its parameters.
### 3.5 Learning To Retrieve

Merely memorizing images within its parameters is insufficient for the MLLM. The model must also be capable of recalling the corresponding images in response to users' queries. To achieve this, we train the MLLM to predict the appropriate identifier when given a specific query q. This process is formulated as follows:

$$\mathcal{I} = \mathbf{MLLM}(q;\ \text{inst-r}), \qquad (2)$$

where inst-r is the textual instruction "Predict the image identifier corresponding to the given query".
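The two objectives differ only in their input and instruction. A minimal sketch of how the supervision pairs could be assembled — the dict layout and helper names are illustrative assumptions of ours; only the instruction strings are quoted from the text:

```python
def memorize_example(image, identifier):
    """Learning to memorize (Eq. 1): image -> identifier."""
    return {
        "image": image,  # pixel input, attended to via the <image> token
        "prompt": "Predict the identifier for the <image>",  # inst-m
        "target": identifier,
    }

def retrieve_example(query, identifier):
    """Learning to retrieve (Eq. 2): textual query -> identifier, no image input."""
    return {
        "image": None,
        "prompt": f"Predict the image identifier corresponding to the given query: {query}",  # inst-r
        "target": identifier,
    }
```

Both examples share the same target sequence, so the identifier acts as the bridge between a query and the memorized image.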
### 3.6 Inference

After training, the MLLM can retrieve images in the same way it generates text. A query is fed into the MLLM, and the model predicts several identifier strings via beam search. Since each identifier uniquely corresponds to an image, the generation results are the retrieval results.

Constrained generation. To confine generation to within-corpus results and ensure they fall within the test set, we implement constrained beam search in the MLLM. This approach leverages a Trie, a form of k-ary search tree, for efficient key location within a set. Specifically, we store all image identifiers in the Trie. Upon receiving a prefix string, the Trie suggests the tokens that can follow it in valid identifiers. This mechanism ensures that every generated identifier exactly matches an existing image's identifier. Furthermore, we employ beam search (Sutskever et al., 2014), a widely used technique, to generate multiple identifiers concurrently. Each identifier is assigned a language model score, facilitating the creation of a ranked list based on these scores. Consequently, the ranked identifiers correspond to a ranked list of images.
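The Trie-based constraint can be sketched as follows (a minimal illustration, assuming identifiers are already tokenized into id sequences; the class layout is ours, and in practice `allowed_next` would be wired into beam search as the per-step vocabulary mask):

```python
class Trie:
    """Stores tokenized image identifiers; used to constrain beam search so
    that every generated identifier matches a real image's identifier."""

    def __init__(self, sequences):
        self.root = {}
        for seq in sequences:              # seq: tuple of token ids for one identifier
            node = self.root
            for tok in seq:
                node = node.setdefault(tok, {})
            node[None] = True              # end-of-identifier marker

    def allowed_next(self, prefix):
        """Tokens that may legally follow `prefix`. An empty result means the
        prefix is already a complete identifier (the decoder should emit EOS).
        Invalid prefixes never arise under constrained decoding."""
        node = self.root
        for tok in prefix:
            if tok not in node:
                return []
            node = node[tok]
        return [t for t in node if t is not None]
```

Each decoding step masks the vocabulary down to `allowed_next(prefix)`, so beam search can only produce identifiers stored in the Trie, and the beams' language-model scores directly rank the corresponding images.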
## 4 Experiments

### 4.1 Datasets And Baselines

We evaluated our proposed generative cross-modal retrieval framework, GRACE, on two commonly used datasets: Flickr30K (Young et al., 2014) and MS-COCO (Lin et al., 2014). Flickr30K contains 31,783 images sourced from Flickr. Each image is associated with five human-annotated sentences. We adopted the data split used by Li et al., comprising 29,783 images for training, 1,000 for validation, and 1,000 for testing. MS-COCO comprises 123,287 images, each annotated with five sentences. We followed the dataset split proposed in (Lee et al., 2018), utilizing 113,287 images for training, 5,000 for validation, and 5,000 for testing.

[Table 1: extraction residue removed. The table compares two-tower baselines (VSE++, Dual-path, CAMERA, CLIP) against GRACE variants (numeric, string, semantic, structured, and atomic identifiers) on Flickr30K and MS-COCO (5K) using R@1/R@5/R@10; only scattered cells survived extraction.]

Consistent with prior studies (Young et al., 2014; Chen et al., 2021),
we evaluated our method using the standard recall metric R@K, where K is set to 1, 5, and 10.

Considering efficiency and applicability, we compared GRACE with two-tower approaches, including VSE++ (Faghri et al., 2017), Dual-path (Zheng et al., 2020), CAMERA (Qu et al., 2020), and CLIP (Radford et al., 2021), as our baseline models. One-tower approaches usually have heavy computational overhead, focusing on the ranking stage rather than the retrieval stage. Therefore, we did not include them as baselines. Implementation details are given in Appendix A.
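The R@K metric above is straightforward to compute from the ranked identifier lists produced by constrained beam search; a small sketch (function name ours):

```python
def recall_at_k(ranked_lists, relevant, k):
    """Fraction of queries whose ground-truth image id appears in the top-k.

    ranked_lists: one ranked list of image identifiers per query.
    relevant:     the ground-truth identifier for each query.
    """
    hits = sum(rel in ranks[:k] for ranks, rel in zip(ranked_lists, relevant))
    return hits / len(ranked_lists)
```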
### 4.2 Overall Results

The summarized comparisons are presented in Table 1. Analysis of this table led to the following observations:

1) GRACE demonstrated the capability to recall relevant images in response to textual queries without any input of image content, underscoring the feasibility of generative cross-modal retrieval.
2) We also noticed variability in performance among GRACE variants with different identifiers. Specifically, numeric and string identifiers yielded very low performance on the MS-COCO dataset. This poor performance can be attributed to the lack of prior knowledge these identifiers provide to the MLLM. The inconsistent correlation between similar images and their identifiers makes it challenging for the MLLM to memorize and establish accurate relationships, especially as the dataset size increases. Furthermore, numeric identifiers underperform string identifiers, likely because they require more generation steps, which increases the chance of errors.
3) In contrast, semantic identifiers, which are based on the image's content, showed better results than numeric and string identifiers. However, their effectiveness was somewhat limited due to the minimal differentiation among semantic identifiers for different images. This was particularly problematic where images shared the same captions, causing the model to generate semantically correct but contextually incorrect identifiers.
4) Structured identifiers achieved good performance by effectively utilizing the images' embedding information through a clustering approach. The hierarchical structure significantly enhanced the MLLM's ability to memorize all images in the dataset.
5) Finally, atomic identifiers were found to be the most effective, even outperforming the CLIP model. This approach assigns a unique token in the vocabulary to each image, ensuring distinct identification. However, it also has its challenges: increasing the number of images directly enlarges the vocabulary size of the MLLM, potentially impacting scalability.

These findings highlight the importance of identifier types in generative cross-modal retrieval and shed light on the trade-offs involved in different approaches.
### 4.3 Ablation Study

Our approach integrates two key training steps: learning to memorize and learning to retrieve. Does the "learning to memorize" phase significantly enhance retrieval performance? During inference, we employed constrained generation to ensure the prediction of valid identifiers. How crucial is constrained generation to the overall retrieval process? To address these questions, we performed experiments that selectively omit the "learning to memorize" step and the constrained generation process. The outcomes (on Flickr30K) are detailed in Table 2.

| GRACE | R@1 | R@5 | R@10 |
| --- | --- | --- | --- |
| Numeric Identifier | 22.5 | 28.9 | 29.4 |
| w/o learning to memorize | 18.2 | 24.3 | 24.9 |
| w/o constrained generation | 7.72 | 16.7 | 21.1 |
| String Identifier | 30.5 | 39.0 | 40.4 |
| w/o learning to memorize | 26.1 | 33.3 | 34.6 |
| w/o constrained generation | 10.9 | 22.3 | 28.0 |
| Semantic Identifier | 22.9 | 34.9 | 37.4 |
| w/o learning to memorize | 19.3 | 31.2 | 34.3 |
| w/o constrained generation | 0.6 | 2.3 | 3.0 |
| Structured Identifier | 37.4 | 59.5 | 66.2 |
| w/o learning to memorize | 36.5 | 61.1 | 68.2 |
| w/o constrained generation | 10.2 | 22.3 | 29.3 |

In our experiments, we observed a slight decrease in performance when the "learning to memorize" training step was removed. This suggests that, while important, this step is not the sole contributor to effective retrieval. Intriguingly, the "learning to retrieve" phase can be considered another form of memorization, in which the model focuses on the image's description rather than its visual content. As a result, the model retains some capability to recall correct images even without the "learning to memorize" step. However, a significant decline in performance was noted upon removing the constrained generation step. This can be attributed to two primary factors.
(1) Generation of out-of-corpus identifiers: without constrained generation, the model tends to predict identifiers that do not correspond to any image in the corpus. This issue is especially pronounced with semantic
identifiers, where the model may generate any textual description, leading to inaccurate retrieval. (2) Prediction of identifiers belonging to the training set: for the other identifier types, while the model still predicts tokens corresponding to valid identifiers, it often predicts images from the training set. The vast number of training images could also be relevant to a given textual query, significantly increasing the difficulty of recalling the correct image in the test set.