doc_id (string, 36 chars) | contents (string, 22 to 3.25k chars) | metadata (dict)
9c4431b4-f6e5-4e83-9fda-6272ecc43621
## A.1. Decode Time Results (table fragment: decode-time speedup entries at γ = 5 and γ = 7; CNN/DailyMail row)
{ "creation_datetime": "2024-03-04", "file_name": "2402.08644v1.md", "file_path": "paper_data/2402.08644v1.md", "file_size": 51949, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6e10fe7f-878b-4a2b-90ab-c952320e0698
## A.1. Decode Time Results (table fragment: decode-time speedup entries at γ = 7)
{ "creation_datetime": "2024-03-04", "file_name": "2402.08644v1.md", "file_path": "paper_data/2402.08644v1.md", "file_size": 51949, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6b74ad3e-4f4b-4583-9791-f203078df42f
## A.1. Decode Time Results (table fragment: decode-time speedup entries at γ = 5 and γ = 7; LM1B row)
{ "creation_datetime": "2024-03-04", "file_name": "2402.08644v1.md", "file_path": "paper_data/2402.08644v1.md", "file_size": 51949, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
94fca9ae-6d7b-4319-8c75-a9567791cc8b
## A.1. Decode Time Results (table fragment: decode-time speedup entries at γ = 7; LM1B row)
{ "creation_datetime": "2024-03-04", "file_name": "2402.08644v1.md", "file_path": "paper_data/2402.08644v1.md", "file_size": 51949, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8e166fd0-304e-4bc9-adb6-fd1861a4e1ba
## A.1. Decode Time Results (table fragment: decode-time speedup entries at γ = 5 and γ = 7)
{ "creation_datetime": "2024-03-04", "file_name": "2402.08644v1.md", "file_path": "paper_data/2402.08644v1.md", "file_size": 51949, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
75f3f26b-df85-4397-8601-07644de2de48
## A.1. Decode Time Results

| Dataset | Speedup over PaLM2-Bison | Speedup over Tandem-Distil + SPEED |
|---|---|---|
| Reddit | 2.885× (γmax = 17) | 1.054× |
| CNN/DailyMail | 2.908× (γmax = 17) | 1.061× |
| LM1B | 3.040× (γmax = 27) | 1.103× |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08644v1.md", "file_path": "paper_data/2402.08644v1.md", "file_size": 51949, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b6332b0f-fae8-4058-bbc8-9c7092d2a8f5
## A.2. Detailed Performance Evaluation Results In Table 10, we present results for our Tandem model and the compared baselines on each individual task in Generative-tasks. Likewise, in Table 11 we present results on each individual task in SuperGLUE.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08644v1.md", "file_path": "paper_data/2402.08644v1.md", "file_size": 51949, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a6774e22-388a-416a-a1f2-0e6996115790
## B. Inference Of Tandem Transformers

Figure 4 presents the inference for Tandem transformers without the free token from the primary model M_L.

Generative-tasks (Table 10):

| Dataset | PaLM2-Gecko | PaLM2-Otter | PaLM2-Bison | Tandem-CE (ours) | Tandem-Distil (ours) |
|---|---|---|---|---|---|
| Lambada (acc = Accuracy) | 45.5 | 59.2 | 68.3 | 78.9 | 82.9 |
| NaturalQuestions (em = Exact Match) | 7.7 | 9.9 | 14.4 | 19.9 | 28.1 |
| SQuADv2 (em) | 45.3 | 67.8 | 70.2 | 70.3 | 75.4 |
| TriviaQA (em) | 36.8 | 36.9 | 51.2 | 68.9 | 77.3 |
| WebQuestions (em) | 9.0 | 12.0 | 16.0 | 17.6 | 23.8 |

SuperGLUE (Table 11):

| Dataset | PaLM2-Gecko | PaLM2-Otter | PaLM2-Bison | Tandem-CE (ours) | Tandem-Distil (ours) |
|---|---|---|---|---|---|
| BoolQ (acc) | 65.4 | 87.8 | 87.6 | 85.5 | 88.8 |
| CB (acc) | 39.3 | 82.1 | 83.9 | 71.4 | 87.5 |
| COPA (acc) | 80.0 | 78.0 | 82.0 | 88.0 | 88.0 |
| RTE (acc) | 55.2 | 80.1 | 78.3 | 84.1 | 77.6 |
| ReCoRD (acc) | 85.5 | 87.8 | 87.2 | 91.2 | 92.2 |
| WIC (acc) | 47.5 | 50.0 | 50.6 | 49.7 | 50.9 |
| WSC (acc) | 75.8 | 81.1 | 80.4 | 86.3 | 86.3 |
| MultiRC (F1) | 53.9 | 80.8 | 80.1 | 76.1 | 80.5 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08644v1.md", "file_path": "paper_data/2402.08644v1.md", "file_size": 51949, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f67d8654-a243-465f-bb85-e0712160cd8d
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning Jun Zhuang Department of Computer Science Boise State University junzhuang@boisestate.edu
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4a33ca65-060d-4535-88b0-a08f3453b84f
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## Abstract As new research on Large Language Models (LLMs) continues, it is difficult to keep up with new research and models. To help researchers synthesize the new research, many have written survey papers, but even those have become numerous. In this paper, we develop a method to automatically assign survey papers to a taxonomy. We collect the metadata of 144 LLM survey papers and explore three paradigms to classify papers within the taxonomy. Our work indicates that leveraging graph structure information on co-category graphs can significantly outperform the language models in two paradigms: fine-tuning of pre-trained language models and zero-shot/few-shot classification using LLMs. We find that our model surpasses an average human recognition level and that fine-tuning LLMs using weak labels generated by a smaller model, such as the GCN in this study, can be more effective than using ground-truth labels, revealing the potential of weak-to-strong generalization in the taxonomy classification task.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6fa258a9-0650-482c-9445-c090476872d8
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 1 Introduction Collective attention in the field of Natural Language Processing (NLP)—and the wider public—has turned to Large Language Models (LLMs). It has become so difficult to keep up with the proliferation of new models that many researchers have written survey papers to help synthesize the research progress. Survey papers are often crucial for newcomers to gain an in-depth understanding of the evolution of a research field. However, the volume of survey papers itself has become unruly for researchers—especially newcomers—to sift through. As illustrated in Figure 1, the number of survey papers has been increasing significantly. This leads to our research question, aimed at aiding the field of NLP: Is it possible to automatically reduce the barriers for newcomers in a way that can keep up with the constant influx of new information?

In this paper, we address the above question by developing a method that can automatically assign survey papers to a taxonomy. Such a taxonomy will help researchers see new trends in the field and focus on specific survey papers that are relevant to their research. Classifying papers into a taxonomy may seem an ordinary task, but it is actually quite challenging for the following reasons:

1. Our dataset contains 144 papers. While this is an uncommonly large number of survey papers, it is still a relatively small number of instances for a dataset.
2. We propose a new taxonomy for the collected survey papers, where the distribution of each category is not uniform, which leads to a substantial class imbalance issue.
3. Authors usually use similar terminologies to describe the LLMs in the title and the abstract of these survey papers. Such textual similarity introduces significant difficulties in taxonomy classification.

To answer our research question, we investigate three types of attributed graphs: text graphs, co-author graphs, and co-category graphs. Extensive experiments indicate that leveraging graph structure information of co-category graphs can help better classify the survey papers to the corresponding categories in the proposed taxonomy. Moreover, we validate that graph representation learning (GRL) can outperform language models in two paradigms, fine-tuning pre-trained language models and zero-shot/few-shot classifications using LLMs. Inspired by a recent study, which indicates that leveraging weak labels, which are generated by smaller (weaker) models, may help enhance the performance of larger
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4dc62d93-1b85-4406-b03d-366e41cba3ea
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 1 Introduction attributed graphs: text graphs, co-author graphs, and co-category graphs. Extensive experiments indicate that leveraging graph structure information of co-category graphs can help better classify the survey papers to the corresponding categories in the proposed taxonomy. Moreover, we validate that graph representation learning (GRL) can outperform language models in two paradigms, fine-tuning pre-trained language models and zero-shot/few-shot classifications using LLMs. Inspired by a recent study, which indicates that leveraging weak labels generated by smaller (weaker) models may help enhance the performance of larger (stronger) models (Burns et al., 2023), we further examine whether using weak labels, generated by GNNs in this study, in the fine-tuning paradigm can help the pre-trained language models. The experiments demonstrate that fine-tuning using weak labels can exceed fine-tuning using ground-truth labels. For the latter paradigm, we use the results of human recognition as the baseline. The analysis demonstrates that GRL achieves higher accuracy and F1 scores, and even surpasses the average human recognition level by a substantial margin. Overall, our primary contributions can be summarized as follows:

- We collected and analyzed 144 survey papers about LLMs and their metadata.1
- We propose a new taxonomy for categorizing the survey papers, which will be helpful for the research community, particularly newcomers and multidisciplinary research teams.
- Extensive experiments demonstrate that graph representation learning on co-category graph structure can effectively classify the papers and substantially outperform the language models and average human recognition level on a relatively small and class-imbalanced dataset with high textual similarity.
- Our results also reveal the potential of fine-tuning pre-trained language models using weak labels.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
69e2e066-d888-4378-a5cf-0a638d0f744a
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 2 Related Work Taxonomy Classification Conventional taxonomy classification is a subset of Automatic Taxonomy Generation (ATG), which aims to generate a taxonomy for a corpus (Krishnapuram and Kummamuru, 2003). The main challenge in ATG is to cluster the unlabeled topics into different hierarchical structures. Thus, most existing methods in ATG are clustering-based methods. Zamir and Etzioni (1998) design a mechanism, Grouper, that dynamically groups and labels the search results. Vaithyanathan and Dom (1999) propose a model to generate hierarchical clusters. Lawrie et al. (2001) discover the relationships among words to generate concept hierarchies. Within these methods, a subset, called co-clustering, clusters keywords and documents simultaneously (Frigui and Nasraoui, 2002; Kummamuru et al., 2003). Different from ATG, in this study we classify survey papers into corresponding categories in the proposed taxonomy on relatively small and class-imbalanced datasets, whose text content contains similar terminologies.

Graph Representation Learning Graph representation learning (GRL) is a powerful approach for learning representations of graph-structured data (Zhou et al., 2020), and most recent works achieve this goal using Graph Neural Networks (GNNs) (Veličković et al., 2018; Xu et al., 2018). Bruna et al. (2013) first introduce a significant advancement in convolution operations applied to graph data using both spatial and spectral methods. To improve the efficiency of the eigendecomposition of the graph Laplacian matrix, Defferrard et al. (2016) approximate spectral filters using a K-order Chebyshev polynomial. Kipf and Welling (2016) simplify graph convolutions to a first-order polynomial while yielding superior performance in semi-supervised learning. Hamilton et al. (2017) propose an inductive-learning approach that aggregates node features from corresponding fixed-size local neighborhoods. These GNNs have demonstrated exceptional performance in GRL, underscoring their significance in advancing this field.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
75810898-4293-4bb7-ad96-e0de0329a5db
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 3 Methodology In this section, we first introduce the procedure of data collection and then explore the metadata. We further explain the process of constructing three types of attributed graphs and how we learn graph representation via graph neural networks.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e3d20096-c93f-42a0-9ef4-b7a8e5d3f932
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 3.1 Data Collection And Exploration We scraped the metadata of survey papers about LLMs from arXiv and further manually supplemented the metadata from Google Scholar and the ACL Anthology. The papers range from July 2021 to January 2024. Given these survey papers, we designed a taxonomy and assigned each paper to a corresponding category within the taxonomy. Our motivation is that a reasonable taxonomy can provide a clear hierarchy of concepts for readers to better understand the relationship among a large number of survey papers. Though survey papers can be taxonomized differently, we noticed two broad categories: *applications* and *model techniques*. The *applications* category further sub-divides into specific domains of focus (e.g., education or science), whereas *model techniques* further sub-divides into ways of effecting models (e.g., fine-tuning). We visualize our proposed taxonomy and highlight fourteen classes, i.e., the leaf nodes, in Figure 2. The total number of classes in the labels is sixteen, including *comprehensive* and *others* (not shown in the figure). To better understand the distribution of the classes, we present the class distribution in Figure 3. The distribution indicates that the classes are extremely imbalanced, introducing a challenge to the taxonomy classification task. After visualizing the proposed taxonomy, we further explain the motivation for proposing a new taxonomy instead of using the arXiv categories. In Figure 4, we present the distribution of survey papers across different arXiv categories. The two most frequent categories are cs.CL (Computation and Language) and cs.AI (Artificial Intelligence), which means that most authors choose these two categories for their works. However, these choices cannot help readers distinguish the survey papers. For example, papers related to model techniques are indistinguishable in arXiv categories. Thus, designing a new taxonomy is an essential step in this study. We also present the word frequency in Figure 5 to show which words have been frequently used in abstracts. These distributions suggest that the abstracts of these papers contain many similar terms, which increases the difficulty of text classification.

| Attribute | Description |
|---|---|
| Taxonomy | Proposed taxonomy |
| Title |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a70740d8-3b0e-4f1d-8922-0399e86bae46
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 3.1 Data Collection And Exploration is an essential step in this study. We also present the word frequency in Figure 5 to show which words have been frequently used in abstracts. These distributions suggest that the abstracts of these papers contain many similar terms, which increases the difficulty of text classification.

| Attribute | Description |
|---|---|
| Taxonomy | Proposed taxonomy |
| Title | Paper title |
| Authors | List of author names |
| Release Date | First released date |
| Links | Links of papers |
| Paper ID | The arXiv paper ID |
| Categories | The arXiv category |
| Summary | Abstract of papers |

Overall, Table 1 presents the description of the data attributes. After designing the taxonomy and building the dataset, we explain how we classify documents into the taxonomy categories in the following section.
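As a rough illustration of this collection step (an assumption, not the authors' scraping code), the public arXiv API already returns the attributes listed above; the query string below is only an example.

```python
import feedparser  # parses the Atom feed returned by the arXiv API

def fetch_arxiv_metadata(query="all:%22large+language+models%22+AND+all:survey",
                         max_results=200):
    url = ("http://export.arxiv.org/api/query?search_query="
           f"{query}&start=0&max_results={max_results}")
    feed = feedparser.parse(url)
    return [{"title": e.title,
             "summary": e.summary,
             "authors": [a.name for a in e.authors],
             "categories": [t["term"] for t in e.tags],
             "release_date": e.published,
             "link": e.link,
             "paper_id": e.id}
            for e in feed.entries]
```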
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
716f6016-79c9-4164-8d97-13bc39fd7c94
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 3.2 Building Attributed Graphs The goal of building the graphs is to utilize the graph structure information to classify the taxonomy. Before building the graphs, we first define the attributed graphs as follows:

Definition 1 An attributed graph G is a topological structure that represents the relationships among vertices associated with attributes. G consists of a set of vertices V = {v1, v2, ..., vN} and edges E ⊆ V × V, where N is the number of vertices in G.

Given Definition 1, we further define the matrix representation of an attributed graph as follows:

Definition 2 Given an attributed graph G(V, E), the topological relationship among vertices can be represented by a symmetric adjacency matrix A ∈ R^{N×N}. Each vertex contains an attribute vector, a.k.a. a feature vector. All feature vectors constitute a feature matrix X ∈ R^{N×d}, where d is the number of features for each vertex. Thus, the matrix representation of an attributed graph can be formulated as G(A, X).

Based on the above definitions, we build the graph by creating the term frequency-inverse document frequency (TF-IDF) feature matrices for both the title and summary (i.e., abstract) columns, where the term frequency denotes the word frequency in the document, and the inverse document frequency denotes the log-scaled inverse fraction of the number of documents containing the word. The TF-IDF matrix is commonly used for text classification tasks because it helps capture the distinctive words that can indicate specific classes (Yao et al., 2019). After establishing the TF-IDF matrices, we apply one-hot encoding on the arXiv categories and then combine the three matrices along the feature dimension to build the feature matrix X.

To leverage the topological information among vertices, we proceed to construct the graph structures to connect the attribute vectors. In this study, we are interested in three types of graphs: text graph, co-author graph, and co-category graph. We explain each type as follows. Text Graph We follow the same settings as
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
26cfd5c7-b9e0-43d0-b59c-0e525a57d3cc
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 3.2 Building Attributed Graphs establishing the TF-IDF matrices, we apply one-hot encoding on the arXiv categories and then combine the three matrices along the feature dimension to build the feature matrix X. To leverage the topological information among vertices, we proceed to construct the graph structures to connect the attribute vectors. In this study, we are interested in three types of graphs: text graph, co-author graph, and co-category graph. We explain each type as follows.

Text Graph We follow the same settings as TextGCN (Yao et al., 2019) to build the text graph. Specifically, the edges of the text graph are built based on word occurrence (paper-word edges) in the paper's text data, including both title and summary, and word co-occurrence (word-word edges) in the whole text corpus. To obtain the global word co-occurrence information, we slide a fixed-size window over all papers' text data. Moreover, we calculate the edge weight between a paper vertex and a word vertex using the TF-IDF value of the word in the paper, and calculate the edge weight between two word vertices using point-wise mutual information (PMI), a popular metric to measure the association between two words. Note that in the text graph we don't use the above feature matrix, because only paper vertices contain attribute vectors. To retain consistency, we set all values in the feature matrix to one. For the same reason, only the paper vertices are assigned labels, whereas all word vertices are labeled as a new class, which is not touched during the training or testing phase.

Co-author Graph In the co-author graph, we introduce an edge connecting two vertices (papers) if they share at least one common author.

Co-category Graph In the co-category graph, an edge is added between two vertices with at least one common arXiv category. In the co-author and co-category graphs, each vertex is assigned one class (taxonomy) as the label. Note that in this study all edges are undirected.
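The sketch below is an illustrative construction (assumed for exposition, not the authors' released code) of the feature matrix X and a co-occurrence adjacency matrix: TF-IDF features for titles and abstracts plus one-hot arXiv categories concatenated along the feature dimension, and an undirected edge between two papers that share at least one author or category.

```python
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MultiLabelBinarizer

def build_feature_matrix(titles, abstracts, categories):
    """titles/abstracts: lists of strings; categories: one list of arXiv tags per paper."""
    title_tfidf = TfidfVectorizer().fit_transform(titles)          # N x d_title
    abstract_tfidf = TfidfVectorizer().fit_transform(abstracts)    # N x d_abstract
    onehot_cats = MultiLabelBinarizer().fit_transform(categories)  # N x d_category
    # Combine the three blocks along the feature dimension to form X.
    return hstack([title_tfidf, abstract_tfidf, onehot_cats]).toarray()

def build_co_occurrence_adjacency(item_lists):
    """item_lists[i]: the authors (or arXiv categories) of paper i."""
    n = len(item_lists)
    adj = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(i + 1, n):
            if set(item_lists[i]) & set(item_lists[j]):  # at least one shared item
                adj[i, j] = adj[j, i] = 1.0              # undirected, symmetric edge
    return adj
```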
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ba95957d-bb0b-4e86-88dd-dde69ee0c498
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 3.3 Taxonomy Classification Via Graph Representation Learning Given the well-built attributed graphs G(A, X), we aim to investigate whether graph representation learning (GRL) using graph neural networks (GNNs) can help classify survey papers into the taxonomy. Before feeding the matrix representation, A and X, of the attributed graphs G into GNNs, we first preprocess the adjacency matrix A as follows:

$$\tilde{\mathbf{A}}=\hat{\mathbf{D}}^{-\frac{1}{2}}\hat{\mathbf{A}}\hat{\mathbf{D}}^{-\frac{1}{2}},\tag{1}$$

where $\hat{\mathbf{A}}=\mathbf{A}+I_{N}$ and $\hat{\mathbf{D}}=\mathbf{D}+I_{N}$. $I_{N}$ is an identity matrix, and $\mathbf{D}_{i,i}=\sum_{j}\mathbf{A}_{i,j}$ is a diagonal degree matrix. After preprocessing, we utilize GNNs to learn the graph representation. The layer-wise message-passing mechanism of GNNs can be generally formulated as follows:

$$f_{\mathbf{W}^{(l)}}\left(\tilde{\mathbf{A}},\mathbf{H}^{(l)}\right)=\sigma\left(\tilde{\mathbf{A}}\mathbf{H}^{(l)}\mathbf{W}^{(l)}\right),\tag{2}$$

where $\mathbf{H}^{(l)}$ is the node hidden representation in the $l$-th layer. The dimension of $\mathbf{H}^{(l)}$ in the input layer, middle layer, and output layer is the number of features $d$, hidden units $h$, and classes $K$, respectively, with $\mathbf{H}^{(0)}=\mathbf{X}$. $\mathbf{W}^{(l)}$ is the weight matrix in the $l$-th layer. $\sigma$ denotes a non-linear activation function, such as ReLU. In general node classification tasks, GNNs are trained with ground-truth labels $\mathbf{Y}\in\mathbb{R}^{N\times1}$. In this study, we build the ground-truth labels based on our proposed taxonomy. To simplify the problem, each paper is assigned one primary category as the label, even if the paper sometimes may belong to more than one category. During training, we optimize GNNs with cross-
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5a3d5235-f252-4b43-8d11-0b3d3e8b3bc7
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 3.3 Taxonomy Classification Via Graph Representation Learning $K$, respectively, with $\mathbf{H}^{(0)}=\mathbf{X}$. $\mathbf{W}^{(l)}$ is the weight matrix in the $l$-th layer. $\sigma$ denotes a non-linear activation function, such as ReLU. In general node classification tasks, GNNs are trained with ground-truth labels $\mathbf{Y}\in\mathbb{R}^{N\times1}$. In this study, we build the ground-truth labels based on our proposed taxonomy. To simplify the problem, each paper is assigned one primary category as the label, even if the paper sometimes may belong to more than one category. During training, we optimize GNNs with cross-entropy. In brief, we address the Taxonomy Classification problem via GRL approaches in this study and formally state the problem as follows:

Problem 1 After building an attributed graph $G(\tilde{\mathbf{A}}, \mathbf{X})$ and the ground-truth labels $\mathbf{Y}$ based on the survey metadata, we train a graph neural network (GNN) on the train data and evaluate the taxonomy classification performance on the test data. Our goal is to design a method to better understand (classify) the taxonomy of the survey papers.
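As a concrete illustration of Eq. (1), Eq. (2), and the cross-entropy training described above, the following is a minimal PyTorch sketch (an assumption for exposition, not the authors' implementation); `model` and `train_mask` are hypothetical names for a GNN built from such layers and a boolean mask over the labeled paper vertices.

```python
import torch
import torch.nn.functional as F

def normalize_adjacency(adj):
    """Eq. (1): A_tilde = D_hat^{-1/2} (A + I) D_hat^{-1/2}."""
    a_hat = adj + torch.eye(adj.shape[0])                # add self-loops
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))  # D_hat^{-1/2}
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def gcn_layer(a_tilde, h, w):
    """Eq. (2): sigma(A_tilde H W), with sigma = ReLU."""
    return F.relu(a_tilde @ h @ w)

def train(model, a_tilde, x, y, train_mask, epochs=500, lr=1e-2):
    """Optimize the GNN with cross-entropy on the labeled (training) papers."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        logits = model(a_tilde, x)                       # N x K class scores
        loss = F.cross_entropy(logits[train_mask], y[train_mask])
        loss.backward()
        optimizer.step()
    return model
```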
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3375d4a3-8259-4b0a-979c-c74bc1bbff20
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 4 Experiment In this section, we evaluate the effectiveness of graph representation learning (GRL) compared with two paradigms using language models.

| Subset | Graph | \|V\| | \|E\| |
|---|---|---|---|
| DataNov23 | Text | 737 | 94,943 |
| DataNov23 | Co-author | 112 | 204 |
| DataNov23 | Co-category | 112 | 4,908 |
| DataJan24 | Text | 951 | 137,709 |
| DataJan24 | Co-author | 144 | 332 |
| DataJan24 | Co-category | 144 | 8,140 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7ab091c2-ca8a-4f0b-becd-c5ff75ec6591
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 4 Experiment

| Subset | Graph | \|V\| | \|E\| |
|---|---|---|---|
| DataJan24 | Text | 951 | 137,709 |
| DataJan24 | Co-author | 144 | 332 |
| DataJan24 | Co-category | 144 | 8,140 |
| Datasubset | Text | 905 | 128,575 |
| Datasubset | Co-author | 134 | 302 |
| Datasubset | Co-category | 134 | 6,964 |

Experimental Settings To examine the generalization of our method on various graph structures, we investigate three types of attributed graphs: text graphs, co-author graphs, and co-category graphs, and compare the classification performance of GRL with that of fine-tuning pre-trained language models across three subsets of our data. Both DataNov23 and DataJan24 contain survey papers collected at the end of the corresponding months (November 2023 and January 2024). DataJan24 includes a new category, Hardware Architecture. We further construct the third subset, Data*subset*, by removing some proposed categories with fewer instances in DataJan24; these categories are Law, Finance, Education, Hardware Architecture, and *Others*. The motivation for constructing three subsets is to validate the generalization of our method across different subsets, since the classification performance may change significantly on small datasets. Also, new categories may emerge at any period because research on LLMs is developing rapidly, and so are related survey papers. For example, a new category, Hardware Architecture, emerges in DataJan24. The change of categories may affect the performance as well. Therefore, we investigate our method on three subsets that contain different categories. The statistics of our dataset and corresponding attributed graphs are presented in Table 2. Recall that the
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
333f93c6-c7bf-4273-b4f0-b730a39cf3d0
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 4 Experiment Education, Hardware Architecture, and *Others*. The motivation for constructing three subsets is to validate the generalization of our method across different subsets, since the classification performance may change significantly on small datasets. Also, new categories may emerge at any period because research on LLMs is developing rapidly, and so are related survey papers. For example, a new category, Hardware Architecture, emerges in DataJan24. The change of categories may affect the performance as well. Therefore, we investigate our method on three subsets that contain different categories. The statistics of our dataset and corresponding attributed graphs are presented in Table 2. Recall that the text graph consists of paper vertices and word vertices, and thus contains one additional class because all word vertices are labeled as a new class, which is not touched during the training or testing phase. To evaluate our model, we split the train, validation, and test data as 60%, 20%, and 20%. Because random splits could result in an easier task for our model, we ran the experiments five times using random seed IDs from 0 to 4 and report the mean values with corresponding standard deviations, mean (std). We evaluate the classification performance by accuracy and weighted F1 score. Accuracy is a common metric for classification tasks, whereas the weighted F1 score provides a balanced measure on the class-imbalanced dataset.
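Below is a small sketch of this evaluation protocol (an assumption for illustration; `run_experiment` is a hypothetical helper that trains a model on one split and returns test predictions, and `labels` is a NumPy array of class IDs): five runs with seeds 0 to 4, a 60/20/20 split, and accuracy plus weighted F1 reported as mean (std).

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

def evaluate(run_experiment, labels, seeds=(0, 1, 2, 3, 4)):
    accs, f1s = [], []
    idx = np.arange(len(labels))
    for seed in seeds:
        # 60% train, 20% validation, 20% test, re-split for every seed.
        train_idx, rest = train_test_split(idx, test_size=0.4, random_state=seed)
        val_idx, test_idx = train_test_split(rest, test_size=0.5, random_state=seed)
        preds = run_experiment(train_idx, val_idx, test_idx, seed)
        accs.append(accuracy_score(labels[test_idx], preds))
        f1s.append(f1_score(labels[test_idx], preds, average="weighted"))
    # Mean (std) over the five runs, as reported in the tables.
    return (np.mean(accs), np.std(accs)), (np.mean(f1s), np.std(f1s))
```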
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
16990bf0-3854-4577-98c7-e6db10724440
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 4.1 Leveraging Graph Structure Information For Taxonomy Classification We investigate whether leveraging the graph structure information can help better classify the papers to their corresponding categories in the proposed taxonomy. In this experiment, we construct the attributed graphs based on the text data (including the title and summary) and the relationship of the co-authorship and co-category. To examine the generalization of GRL, we employ GCN (Kipf and Welling, 2016) as a backbone GNN on various graph structures across three subsets. According to Table 3, GNNs fail to learn graph representation on both the text graph and the co-author graph. For

| Graph | DataNov23 Accuracy | DataNov23 Weighted-F1 | DataJan24 Accuracy | DataJan24 Weighted-F1 | Datasubset Accuracy | Datasubset Weighted-F1 |
|---|---|---|---|---|---|---|
| Text | 20.91 (5.45) | 14.20 (4.41) | 17.86 (7.14) | 16.31 (4.49) | 23.08 (4.87) | 18.82 (1.50) |
| Co-author | 33.04 (8.06) | 33.06 (8.69) | 20.00 (8.56) | 19.24 (8.79) | 29.63 (7.03) | 29.24 (6.02) |
| Co-category (All) | 63.48 (18.36) | 62.82 (16.96) | 75.17 (5.52) | 74.60 (4.81) | 79.26 (6.87) | 77.88 (7.52) |
| Co-category (Rm cs.CL) | 70.43 (9.28) | 68.46 (9.63) | 67.59 (15.36) | 65.81 (17.03) | 76.30 (12.31) | 73.83 (14.53) |
| Co-category (Rm cs.AI) | 73.91 (18.03) | 72.41 (18.28) | 75.86 (8.99) | 75.79 (9.62) | 77.04 (3.63) | 74.15 (4.11) |
| Co-category (Rm cs.CL, cs.AI) | 26.09 (10.65) | 20.19 (10.56) | 37.93 (8.45) | 35.97 (7. | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
044a902d-1ab4-44ae-b7a5-940c2dfa2492
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 4.1 Leveraging Graph Structure Information For Taxonomy Classification 76.30 (12.31) 73.83 (14.53)

| Graph | DataNov23 Accuracy | DataNov23 Weighted-F1 | DataJan24 Accuracy | DataJan24 Weighted-F1 | Datasubset Accuracy | Datasubset Weighted-F1 |
|---|---|---|---|---|---|---|
| Co-category (Rm cs.AI) | 73.91 (18.03) | 72.41 (18.28) | 75.86 (8.99) | 75.79 (9.62) | 77.04 (3.63) | 74.15 (4.11) |
| Co-category (Rm cs.CL, cs.AI) | 26.09 (10.65) | 20.19 (10.56) | 37.93 (8.45) | 35.97 (7.92) | 49.63 (7.63) | 47.32 (7.59) |
| Co-category (Rm cs.IR) | 63.48 (18.36) | 62.82 (16.96) | 75.17 (5.52) | 74.60 (4.81) | 79.26 (6.87) | 77.88 (7.52) |
| Co-category (Rm cs.RO) | 63.48 (18.36) | 62.82 (16.96) | 75.17 (5.52) | 74.60 (4.81) | 79.26 (6.87) | 77.88 (7.52) |
| Co-category (Rm cs.SE) | 65.22 (11.00) | 63.21 (10.37) | 74.48 (8.33) | 75.10 (6.77) | 82.96 (6.02) | 82.93 (6.22) |
| Co-category (Rm cs.IR, cs.RO) | 63.48 (18.36) | 62.82 (16.96) | 75.17 (5.52) | 74.60 (4.81) | 79.26 (6.87) | 77.88 (7.52) |
| Co-category (Rm cs.IR, cs.SE) | 65.22 (11.00) | 63.21 (10.37) | 74.48 (8.33) | 75.10 (6.77) | 82.96 (6.02) | 82.93 (6.22) |
| Co-category (Rm cs.RO, cs.SE) | 65.22 (11.00) | 63.21 (10.37) | 74.48 | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7f478a3d-39e9-4c12-8c9c-93b782a7e517
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 4.1 Leveraging Graph Structure Information For Taxonomy Classification 60 (4.81) 79.26 (6.87) 77.88 (7.52)

| Graph | DataNov23 Accuracy | DataNov23 Weighted-F1 | DataJan24 Accuracy | DataJan24 Weighted-F1 | Datasubset Accuracy | Datasubset Weighted-F1 |
|---|---|---|---|---|---|---|
| Co-category (Rm cs.IR, cs.SE) | 65.22 (11.00) | 63.21 (10.37) | 74.48 (8.33) | 75.10 (6.77) | 82.96 (6.02) | 82.93 (6.22) |
| Co-category (Rm cs.RO, cs.SE) | 65.22 (11.00) | 63.21 (10.37) | 74.48 (8.33) | 75.10 (6.77) | 82.96 (6.02) | 82.93 (6.22) |
| Co-category (Rm cs.IR, cs.RO, cs.SE) | 65.22 (11.00) | 63.21 (10.37) | 74.48 (8.33) | 75.10 (6.77) | 82.96 (6.02) | 82.93 (6.22) |

the text graph, we argue that the degradation may be caused by excessively similar words in the summaries of the survey papers. When constructing the text graph, these word vertices connect with many paper vertices, which makes the paper vertices less distinguishable. For the co-author graph, we conjecture that it is challenging to categorize papers solely based on the sparse co-authorship in this dataset. Furthermore, we observe that some co-authorships come from a common mentor in the same lab, whereas the two first authors work on survey papers in two distinct categories. These reasons weaken the effectiveness of using graph structure information. GNNs, in contrast, are very reliable (evaluated by both accuracy and weighted F1 score) on most co-category graphs.

Ablation Analysis We further examine the graph structures of co-category graphs by conducting ablation studies. First, according to Figure 4, most papers are assigned to cs.CL and cs.AI in the arXiv categories. Thus, we study how the categories cs.CL and cs.AI affect the performance by muting these two categories in a combinatorial manner. In Table 3, we observe that GNNs can maintain comparable performance after removing either cs.CL or cs.AI. However, the performance dramatically drops after removing both categories.
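A small sketch of this category-removal ablation follows (assumed helper names, not the authors' code): the listed categories are dropped from every paper's tag list before the co-category graph is rebuilt with the same builder sketched in Section 3.2.

```python
def remove_categories(paper_categories, removed=("cs.CL", "cs.AI")):
    """paper_categories: list of arXiv tag lists, one list per paper."""
    return [[c for c in cats if c not in removed] for cats in paper_categories]

# Example: rebuild the co-category graph without cs.CL, then without both categories.
# cats_no_cl = remove_categories(paper_categories, removed=("cs.CL",))
# adj_no_cl = build_co_occurrence_adjacency(cats_no_cl)
```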
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9bc07415-a2fb-4c03-abff-92bd03332bfe
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 4.1 Leveraging Graph Structure Information For Taxonomy Classification most co-category graphs. Ablation Analysis We further examine the graph structures of co-category graphs by conducting ablation studies. First, according to Figure 4, most papers are assigned to cs.CL and cs.AI in the arXiv categories. Thus, we study how the categories cs.CL and cs.AI affect the performance by muting these two categories in a combinatorial manner. In Table 3, we observe that GNNs can maintain comparable performance after removing either cs.CL or cs.AI. However, the performance dramatically drops after removing both categories. This is plausible since most node connections are significantly sparsified after these two categories are removed. Even though neither cs.CL nor cs.AI directly maps to the existing classes, either one can connect the nodes and further strengthen the message-passing in GNNs, allowing GNNs to learn better node representations. We visualize the co-category graphs in DataJan24 in Figure 6. The visualization indicates that most nodes are clustered well even if we remove either cs.CL or cs.AI. However, after removing these two categories simultaneously, we observe that node classification gradually becomes disordered and several nodes are then isolated. This visualization illustrates the effectiveness of GRL. We further visualize GCNs' hidden representations on the above co-category graphs in DataJan24 in Figure 7. The figures show that the nodes are well-classified in the hidden space even if either cs.CL or cs.AI is removed. However, the distribution of nodes tends to become chaotic when both of these categories are removed simultaneously, as shown in Table 3. For completeness, we conducted another ablation study to examine how the categories cs.IR,

(Figure panels: (a) all categories; (b) removed cs.CL; (c) removed cs.AI; (d) removed cs.CL and cs.AI)
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ac59e667-c406-4282-b433-3345cd585dcd
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 4.1 Leveraging Graph Structure Information For Taxonomy Classification (Figure panels: removed cs.CL; (c) removed cs.AI; (d) removed cs.CL and cs.AI)

| Model | DataNov23 Accuracy | DataNov23 Weighted-F1 | DataJan24 Accuracy | DataJan24 Weighted-F1 | Datasubset Accuracy | Datasubset Weighted-F1 |
|---|---|---|---|---|---|---|
| BERT (Kenton and Toutanova, 2019) | 30.43 (18.45) | 25.70 (19.91) | 43.45 (18.84) | 41.50 (22.31) | 58.74 (6.87) | 57.51 (6.94) |
| RoBERTa (Liu et al., 2019) | 41.74 (20.32) | 39.17 (22.84) | 35.86 (17.53) | 27.23 (23.67) | 25.93 (12.17) | 17.02 (15.16) |
| DistilBERT (Sanh et al., 2019) | 57.39 (8.87) | 55.59 (10.66) | 53.10 (2.76) | 52.07 (4.47) | 59.26 (7.41) | 58.15 (8.82) |
| XLNet (Yang et al., 2019) | 25.22 (16.82) | 21.59 (20.54) | 27.59 (14.30) | 21.59 (15.39) | 28.52 (9.37) | 21.51 (9.65) |
| Electra (Clark et al., 2019) | 23.04 (4.76) | 19.06 (4.06) | 44.83 (7.23) | 42.01 (8.39) | 20.01 (6.87) | 12.03 (8.45) |
| Albert (Lan et | | | | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b52c6704-937b-49a4-85e6-35b8ae87189f
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 4.1 Leveraging Graph Structure Information For Taxonomy Classification

| Model | DataNov23 Accuracy | DataNov23 Weighted-F1 | DataJan24 Accuracy | DataJan24 Weighted-F1 | Datasubset Accuracy | Datasubset Weighted-F1 |
|---|---|---|---|---|---|---|
| XLNet (Yang et al., 2019) | 25.22 (16.82) | 21.59 (20.54) | 27.59 (14.30) | 21.59 (15.39) | 28.52 (9.37) | 21.51 (9.65) |
| Electra (Clark et al., 2019) | 23.04 (4.76) | 19.06 (4.06) | 44.83 (7.23) | 42.01 (8.39) | 20.01 (6.87) | 12.03 (8.45) |
| Albert (Lan et al., 2019) | 11.30 (8.06) | 4.85 (7.21) | 15.17 (4.68) | 5.14 (2.87) | 20.74 (9.83) | 11.41 (11.15) |
| BART (Lewis et al., 2020) | 51.30 (17.48) | 50.30 (17.62) | 51.72 (3.08) | 50.62 (2.79) | 58.25 (8.11) | 57.68 (8.90) |
| DeBERTa (He et al., 2020) | 24.78 (8.06) | 19.61 (10.36) | 26.21 (11.03) | 20.92 (14.25) | 25.93 (10.73) | 24.30 (10.52) |
| Llama2 (Touvron et al., 2023) | 14.48 (8.72) | 4.77 (4.35) | 19.22 (5.90) | 6.03 (4.21) | 23.45 (8.72) | 12.59 (7.23) |

cs.RO, and cs.SE affect the classification performance, as their names are similar to the names of some classes in our proposed taxonomy (recall that our proposed taxonomy is not based on the arXiv categories). According to Table 3, the classification performance is well maintained no matter which category is removed, whereas removing cs.SE does slightly change the results (highlighted by gray color). We argue that the results are reasonable since these removals only drop a small number of edges and don't break the topological relationships in the graph. Overall, these studies verify that leveraging the graph structure information in co
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3d2ccd53-8568-40fb-903a-a7d9dd9ff580
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 4.1 Leveraging Graph Structure Information For Taxonomy Classification cs.RO, and cs.SE affect the classification performance, as their names are similar to the names of some classes in our proposed taxonomy (recall that our proposed taxonomy is not based on the arXiv categories). According to Table 3, the classification performance is well maintained no matter which category is removed, whereas removing cs.SE does slightly change the results (highlighted by gray color). We argue that the results are reasonable since these removals only drop a small number of edges and don't break the topological relationships in the graph. Overall, these studies verify that leveraging the graph structure information in co-category graphs can positively contribute to the taxonomy classification.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e3ea3d09-0c16-4f64-ba6f-1d7a5b4c46f2
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 4.2 Fine-Tuning Pre-Trained Language Models After verifying the effectiveness of GRL, we continue to investigate whether GRL can outperform fine-tuning pre-trained language models on the text data across three subsets in the taxonomy classification task. To preprocess the text data, we follow the same setup as the cleaning process for text graphs. In this fine-tuning paradigm, we use various transformer-based (Vaswani et al., 2017) pre-trained language models as competing models, such as BERT (Kenton and Toutanova, 2019), which learns bidirectional representations and significantly enhances performance across a wide range of contextual understanding tasks. The results in Table 3 and Table 4 give us an affirmative answer regarding the superiority of GRL. In Table 4, we further observe that medium-size language models, such as DistilBERT (Sanh et al., 2019), work better on smaller text data. However, the performance may drop dramatically when the model size is too small, such as Albert (Lan et al., 2019), or too large, such as Llama2 (Touvron et al., 2023). We conjecture that a smaller pre-trained model may be more sensitive to the domain shift issue (our dataset has a distinct class distribution compared to the datasets used to pre-train the language models), whereas a large pre-trained model may suffer from overfitting when it is fine-tuned on smaller text data.

Fine-tuning with Weak Labels The above experiments confirmed that smaller ("weaker") GNNs can surpass larger ("stronger") language models in the taxonomy classification task. Recently, Burns et al. (2023) verified that training stronger models with pseudo labels, a.k.a. weak labels, generated by weaker models can enable the stronger models to achieve performance close to that of models trained with ground-truth labels. In this experiment, we first generate weak labels with a GCN on co-category graphs and then fine-tune pre-trained language models with these weak labels. We present the results on DataJan24 in Figure 8 as an example. The results indicate that performance achieved through training with weak labels can surpass that of training with ground-truth labels. One possible reason is that training the model using noisy labels with a low noise ratio can act as a kind of regularization, improving the classification results (Zhuang and Al Hasan, 2022).2 This experiment demonstrates that leveraging weak labels generated by smaller models may effectively enhance the
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b495ddb6-0d58-467d-8136-0bfefd83ec35
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 4.2 Fine-Tuning Pre-Trained Language Models labels. In this experiment, we first generate weak labels with a GCN on co-category graphs and then fine-tune pre-trained language models with these weak labels. We present the results on DataJan24 in Figure 8 as an example. The results indicate that performance achieved through training with weak labels can surpass that of training with ground-truth labels. One possible reason is that training the model using noisy labels with a low noise ratio can act as a kind of regularization, improving the classification results (Zhuang and Al Hasan, 2022).2 This experiment demonstrates that leveraging weak labels generated by smaller models may effectively enhance the performance of larger models. This is one of the applications related to "weak-to-strong generalization" (Burns et al., 2023).
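The following is a minimal sketch of this weak-label recipe (an illustration under assumptions, not the released code): the trained GCN's predictions on the training papers serve as pseudo labels, and a pre-trained language model is fine-tuned on them with the Hugging Face Trainer. `gcn`, `a_tilde`, `x`, `train_mask`, and `train_dataset` are hypothetical names.

```python
import torch
from transformers import (AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

@torch.no_grad()
def generate_weak_labels(gcn, a_tilde, x, train_mask):
    """Pseudo (weak) labels from the weaker model for the training papers."""
    logits = gcn(a_tilde, x)
    return logits.argmax(dim=-1)[train_mask]

def fine_tune_with_labels(train_dataset, num_labels, name="bert-base-uncased"):
    """train_dataset: already tokenized; its 'labels' field holds the weak labels."""
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=num_labels)
    args = TrainingArguments(output_dir="out", learning_rate=1e-4,
                             per_device_train_batch_size=16, num_train_epochs=30)
    trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
    trainer.train()
    return model
```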
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
cfe7f874-e8b6-4358-bcd5-78581cf0a715
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 4.3 LLM Zero-Shot/Few-Shot Classification And Human Evaluation

| Method | Accuracy | Weighted-F1 |
|---|---|---|
| Human | 58.73 (19.16) | 59.50 (19.13) |
| Claude w.o. hints | 11.61 (1.27) | 12.66 (0.14) |
| Claude w. hints | 10.27 (3.15) | 12.81 (2.10) |
| GPT 3.5 w.o. hints | 47.32 (3.25) | 43.21 (4.33) |
| GPT 3.5 w. hints | 53.57 (2.81) | 53.16 (3.13) |
| GPT 4 w.o. hints | 29.76 (7.22) | 26.91 (9.66) |
| GPT 4 w. hints | 33.04 (5.57) | 27.78 (7. |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
23904428-bf12-4f49-83c3-bd909c5ba9b8
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 4.3 LLM Zero-Shot/Few-Shot Classification And Human Evaluation

| Method | Accuracy | Weighted-F1 |
|---|---|---|
| GPT 4 w.o. hints | 29.76 (7.22) | 26.91 (9.66) |
| GPT 4 w. hints | 33.04 (5.57) | 27.78 (7.76) |

| Major | CS | DS | Security | Math | Chemistry |
|---|---|---|---|---|---|
| #Students | 9 | 4 | 2 | 2 | 1 |

In this experiment, we evaluate the zero-shot/few-shot classification capabilities of the LLMs Claude (Bai et al., 2022), GPT 3.5 (Brown et al., 2020), and GPT 4 (Achiam et al., 2023) on the text data, which contains both title and summary, using DataNov23 as an example. We also compare the results with human participants. We recruited 18 students across five different majors in a graduate-level course. The number of students in each major is shown in Table 6. Each participant was given the titles and abstracts of survey papers and was asked to assign a category to each paper from our taxonomy. We present the mean value with the corresponding standard deviation in Table 5. For the LLMs, we ran the experiments five times. The standard deviation in human recognition is relatively large because some students do not have a strong technical background and therefore perform worse in this test. Among the LLMs, GPT 3.5 outperforms the other two models, given that none of the models have seen the data before (zero-shot). We further provide some hints to the models before classification (few-shot). For example, we release the keywords of the class "Trustworthy" to the models before classification. In this setting, both GPT 3.5 and GPT 4 achieve higher accuracy and weighted F1 scores after obtaining the hints. In brief, GRL can outperform all three LLMs and human recognition, whereas these LLMs could not surpass human recognition, which reveals that LLMs still have much room for improvement in taxonomy classification.
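A minimal zero-shot sketch is shown below, assuming the OpenAI Python client; the prompt wording and the `TAXONOMY` list are illustrative placeholders, not the exact prompts or full class list used in this study.

```python
from openai import OpenAI

TAXONOMY = ["Trustworthy", "Education", "Finance", "Comprehensive", "Others"]  # illustrative subset

def classify_zero_shot(title, abstract, model="gpt-3.5-turbo"):
    client = OpenAI()
    prompt = (
        f"Assign this LLM survey paper to exactly one category from {TAXONOMY}.\n"
        f"Title: {title}\nAbstract: {abstract}\n"
        "Answer with the category name only."
    )
    response = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content.strip()
```

In the few-shot ("with hints") setting, the same prompt would additionally list a few keywords per class before the paper text.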
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
75eadf73-9e43-4767-a0db-b52ce43354ba
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## 5 Conclusion In this work, we aim to develop a method to automatically assign survey papers about Large Language Models (LLMs) to a taxonomy. To achieve this goal, we first collected the metadata of 144 LLM survey papers and proposed a new taxonomy for these papers. We further explored three paradigms to classify survey papers into the categories of the proposed taxonomy. After investigating three types of attributed graphs, we observed that leveraging graph structure information on co-category graphs can significantly help the taxonomy classification. Furthermore, our analysis validates that graph representation learning outperforms both fine-tuning of pre-trained language models and zero-shot/few-shot classification using LLMs, and even surpasses the average human recognition level. Last but not least, our experiments indicate that fine-tuning pre-trained language models using weak labels, which are generated by a weaker model such as GCN, can be more effective than using ground-truth labels, revealing the potential for weak-to-strong generalization in the taxonomy classification task.

Limitations & Future Work Constructing a graph structure may encounter certain constraints. For instance, we build co-category graphs based on the arXiv categories. When papers come from distinct fields, such as biology, physics, and computer science, the graph structure may be very sparse, weakening the effectiveness of GRL. In the future, our primary motivation extending from this study is to tailor GPT-based applications to assist readers in understanding survey papers more effectively. We also plan to further explore weak-to-strong generalization, which could potentially have many important applications.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
098ec8f3-e5fd-4d0a-917c-c0388dc77b1d
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## A Appendix In the appendix, we present the GNN and pre-trained language models' hyper-parameters and the hardware and software. We also include the additional comparison results about fine-tuning using weak labels and additional visualizations of co-category graphs.

Hyper-parameters and Settings We employ a two-layer GCN (Kipf and Welling, 2016) with 200 hidden units and a ReLU activation function as the backbone GNN to examine the effectiveness of GRL. The GNN is trained with the Adam optimizer using a learning rate of 1 × 10−2 for both co-author and co-category graphs and 2 × 10−2 for text graphs, and converges within 500 training epochs on all subsets. The dropout rate is 0.5.

| Language Model | Model Size |
|---|---|
| BERT (Kenton and Toutanova, 2019) | 109.49M |
| RoBERTa (Liu et al., 2019) | 124.66M |
| DistilBERT (Sanh et al., |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1f5d52a4-ce79-472d-8211-a820f24e5714
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## A Appendix

| Language Model | Model Size |
|---|---|
| RoBERTa (Liu et al., 2019) | 124.66M |
| DistilBERT (Sanh et al., 2019) | 66.97M |
| XLNet (Yang et al., 2019) | 117.32M |
| Electra (Clark et al., 2019) | 109.49M |
| Albert (Lan et al., 2019 | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a5578d46-fda4-43ce-9d26-410ce4a2623e
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## A Appendix

| Language Model | Model Size |
|---|---|
| Electra (Clark et al., 2019) | 109.49M |
| Albert (Lan et al., 2019) | 11.70M |
| BART (Lewis et al., 2020) | 140.02M |
| DeBERTa (He et al., 2020) | 139.20M |
| Llama2 (Touvron et al., 2023) | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6689d15d-b322-47aa-87b7-4522a12f8e79
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## A Appendix

| Language Model | Model Size |
|---|---|
| DeBERTa (He et al., 2020) | 139.20M |
| Llama2 (Touvron et al., 2023) | 6.61B |

We fine-tune the pre-trained language models using the Adam optimizer with a 1 × 10−4 learning rate. We chose a batch size of 8 for Llama2 and fixed the batch size at 16 for the rest of the models. We implement the pre-trained language models using HuggingFace packages (we choose the base version for all models) and report the model sizes in Table 7. All models are tuned for 30 epochs.

Hardware and Software The experiment is conducted on a server with the following settings:
- Operating System: Ubuntu 22.04.3 LTS
- CPU: Intel Xeon w5-3433 @ 4.20 GHz
- GPU: NVIDIA RTX A6000 48GB
- Software: Python 3.11, PyTorch 2.1, HuggingFace 4.31, dgl 1.1.2+cu118.

Computational Budgets Based on the above computing infrastructure and settings, the computational budgets of our experiments are as follows. The experiment presented in Table 3 can be reproduced within one hour. The experiment displayed in Table 4 may take 93 hours to complete. Due to limited GPU memory, we implemented Llama2 using the CPU, which consumes around 90 hours in total. The experiment shown in Table 5 (excluding human recognition) can be finished in one hour.

Additional Visualization of Co-category Graphs Besides visualizing four graph structures
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4f695017-9679-427a-bb42-1599c15bfad4
# Understanding Survey Paper Taxonomy About Large Language Models Via Graph Representation Learning ## A Appendix Face 4.31, dgl 1.1.2+cu118.

Computational Budgets Based on the above computing infrastructure and settings, the computational budgets of our experiments are as follows. The experiment presented in Table 3 can be reproduced within one hour. The experiment displayed in Table 4 may take 93 hours to complete. Due to limited GPU memory, we implemented Llama2 using the CPU, which consumes around 90 hours in total. The experiment shown in Table 5 (excluding human recognition) can be finished in one hour.

Additional Visualization of Co-category Graphs Besides visualizing four graph structures in DataJan24 in Figure 6, we additionally present the visualization of the four corresponding co-category graphs in both DataNov23 and Data*subset* in Figure 9. The visualization verifies the generalization of GRL across the three subsets.

Additional Comparison of Fine-tuning Using Weak Labels Besides the results in Figure 8, we supplement the comparisons on both DataNov23 and Data*subset* in Figure 10. The comparisons across nine pre-trained language models further validate the effectiveness of fine-tuning using weak labels.

Ethical and Broader Impacts We confirm that we fulfill the authors' responsibilities and address the potential ethical issues. In this work, we aim to help researchers quickly and better understand a new research field. Many researchers in academia or industry may potentially benefit from our work.

Statement of Data Privacy Our dataset contains the authors' names for each paper. This information is publicly available, so the collection process does not infringe on personal privacy.

Disclaimer Regarding Human Subjects Results In Table 5, we include partial results with human subjects. We obtained approval from the Institutional Review Board (IRB); the protocol number is IRB24-056. We recruited volunteers from a graduate-level course. Before the assessment, we disclosed the potential risk (our assessment has no potential risk) and obtained consent from participants.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10409v1.md", "file_path": "paper_data/2402.10409v1.md", "file_size": 43907, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ddfde6e5-c000-405a-926b-f84827ad5393
# Mustard: Mastering Uniform Synthesis Of Theorem And Proof Data Yinya Huang1 Xiaohan Lin2 Zhengying Liu3† Qingxing Cao2† Huajian Xin2 Haiming Wang2 Zhenguo Li3 Linqi Song1† Xiaodan Liang2,4,5† 1City University of Hong Kong 2Shenzhen Campus of Sun Yat-sen University 3Huawei Noah's Ark Lab 4DarkMatter AI Research 5MBZUAI yinya.huang@hotmail.com, linxh55@mail2.sysu.edu.cn, {liuzhengying2, Li.Zhenguo}@huawei.com, caoqx8@sysu.edu.cn, linqi.song@cityu.edu.hk, xdliang328@gmail.com
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b6f92bda-1543-481e-b547-bb46d0e330c6
# Mustard: Mastering Uniform Synthesis Of Theorem And Proof Data ## Abstract Recent large language models (LLMs) have witnessed significant advancement in various tasks, including mathematical reasoning and theorem proving. As these two tasks require strict and formal multi-step inference, they are appealing domains for exploring the reasoning ability of LLMs but still face important challenges. Previous studies such as Chain-of-Thought (CoT) have revealed the effectiveness of intermediate steps guidance. However, such step-wise annotation requires heavy labor, leading to insufficient training steps for current benchmarks. To fill this gap, this work introduces MUSTARD, a data generation framework that masters uniform synthesis of theorem and proof data of high quality and diversity. MUSTARD synthesizes data in three stages: (1) It samples a few mathematical concept seeds as the problem category. (2) Then, it prompts a generative language model with the sampled concepts to obtain both the problems and their step-wise formal solutions. (3) Lastly, the framework utilizes a proof assistant (e.g., Lean Prover) to filter the valid proofs. With the proposed MUSTARD, we present a theorem-and-proof benchmark MUSTARDSAUCE with 5,866 valid data points. Each data point contains an informal statement, an informal proof, and a translated formal proof that passes the prover validation. We perform extensive analysis and demonstrate that MUSTARD generates validated high-quality step-by-step data. We further apply the MUSTARDSAUCE for fine-tuning smaller language models. The fine-tuned Llama 2-7B achieves a 15.41% average relative performance gain in automated theorem proving, and 8.18% in math word problems.1
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e51200bd-53ed-4ed8-aae7-6ef64d381870
# Mustard: Mastering Uniform Synthesis Of Theorem And Proof Data ## 1 Introduction Large language models (LLMs) (OpenAI, 2023; 2022) have shown promising reasoning capabilities in various domains, including math word problems and theorem proving (Cobbe et al., 2021; Hendrycks et al., 2021; Zheng et al., 2022; Wu et al., 2021). These two tasks, which require strict and successive multi-step inference, have become appealing domains for evaluating and developing LLMs' ability in complex reasoning. Recent works progress LLMs in solving math problems mainly through two techniques. The first is chain-of-thought (CoT) prompting (Wei et al., 2022; Kojima et al., 2022; Wang et al., 2023b), which provides step-by-step solutions to the LLMs. The second is to leverage the LLMs' ability in code generation to generate formalized languages and utilize external solvers to obtain strict inference results (Wu et al., 2022; Jiang et al., 2023; Polu & Sutskever, 2020; Han et al., 2022; Polu et al., 2023). Both techniques rely on step-wise annotation to improve LLMs' performance and interpretability on math problems. Correct intermediate steps are crucial for LLMs to perform complex reasoning. However, high-quality step-wise annotations are hard to obtain, and Figure 1 demonstrates a few representative works. Previous works such as miniF2F (Zheng et al., 2022) resort to manual annotation and validation to obtain high-quality step-wise labels. However, manual annotation requires heavy labor from knowledgeable experts, resulting in an extremely small-scale dataset. Manual checking also does not guarantee the correctness of the data, as labelers can make mistakes. On the other hand, generating data with rule-based checking such as ROSCOE (Golovneva et al., 2023) can produce large-scale reasoning data. Although the generated data are more friendly and readable to humans, the correctness of the reasoning is not guaranteed by those rules. Moreover, another line of work such as INT (Wu et al., 2021) performs rule-based synthesis to generate validated proofs, which are both correct and large-scale. However, the data are synthesized by brute force, so many generated proofs lack actual meaning. Therefore, we need a more efficient way to generate mathematical data that are large-scale, have accurate intermediate steps, and carry mathematical knowledge that is meaningful to humans. To fill this gap, we propose
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5207ba09-5c8e-4cf9-bc97-98fa43fcd6a3
# Mustard: Mastering Uniform Synthesis Of Theorem And Proof Data ## 1 Introduction Generating data with rule-based checking such as ROSCOE (Golovneva et al., 2023) can produce large-scale reasoning data. Although the generated data are more friendly and readable to humans, the correctness of the reasoning is not guaranteed by those rules. Moreover, another line of work such as INT (Wu et al., 2021) performs rule-based synthesis to generate validated proofs, which are both correct and large-scale. However, the data are synthesized by brute force, so many generated proofs lack actual meaning. Therefore, we need a more efficient way to generate mathematical data that are large-scale, have accurate intermediate steps, and carry mathematical knowledge that is meaningful to humans. To fill this gap, we propose MUSTARD, a data generation framework that uniformly synthesizes large-scale and high-quality mathematical data by combining the advantages of LLMs in verbalization and formal theorem provers in rigorous data validation. Specifically, MUSTARD first samples a few mathematical concepts from a predefined list and prompts an LLM to generate a related question described in natural language. Then, it applies the LLM to generate the corresponding solution in both natural and formal language. Given the generated solution, MUSTARD further validates it using a theorem prover. A solution that passes is considered correct and is kept as a high-quality data point. An invalid solution, on the other hand, is treated as a challenging sample: it is combined with the error messages to prompt the LLM for a revision and is added as a challenging data point. By applying the proposed MUSTARD, one can obtain large numbers of problems and theorems covering the desired mathematical concepts and domains. Eventually, we build a mathematical dataset with validated informal and formal solutions, named MUSTARDSAUCE (MUSTARD resource). We conduct extensive data analysis and experiments on the generated MUSTARDSAUCE. Through deep inspection of the data, we find that MUSTARD generates interesting and reasonable math problems by creatively combining two mathematical concepts, and that MUSTARDSAUCE is diverse and has a high proportion of difficult data. We also observe that the prover is consistent with human evaluation, where humans usually consider a validated solution to have higher quality than one without a formal validation process. Lastly, we fine-tune smaller-scale language models on MUSTARDSAUCE. The fine-tuned Llama 2-7B improves zero-shot inference on GSM8K by 20.9% and achieves a pass@1 of 8.7 on mathlib. These results demonstrate the effectiveness of MUSTARDSAUCE in
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5aaa4d09-afbc-4f32-806e-0647659d3f67
# Mustard: Mastering Uniform Synthesis Of Theorem And Proof Data ## 1 Introduction mathematical concepts, and that MUSTARDSAUCE is diverse and has a high proportion of difficult data. We also observe that the prover is consistent with human evaluation, where humans usually consider a validated solution to have higher quality than one without a formal validation process. Lastly, we fine-tune smaller-scale language models on MUSTARDSAUCE. The fine-tuned Llama 2-7B improves zero-shot inference on GSM8K by 20.9% and achieves a pass@1 of 8.7 on mathlib. These results demonstrate the effectiveness of MUSTARDSAUCE in improving the mathematical reasoning capabilities of language models. The contributions of this paper are summarized as follows: 1. We propose a novel framework MUSTARD that can generate high-quality mathematical data (both informal and formal) through an interplay between a generative language model and a theorem prover assistant. 2. We release MUSTARDSAUCE, which contains both math word problems and theorem-proving problems spanning four educational levels. Each sample has corresponding informal and formal solutions. 3. We conduct extensive analysis and experiments on the generated data, demonstrating their quality, diversity, and effectiveness in improving language models' mathematical reasoning performance.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e96c87c4-ff90-4097-8d10-a9f4617c3a90
# Mustard: Mastering Uniform Synthesis Of Theorem And Proof Data ## 2 Related Works Large Language Models for Mathematical Reasoning The growing generative language models (Brown et al., 2020; OpenAI, 2022; 2023) show compelling potential for solving mathematical problems, both in natural language proofs (OpenAI, 2023; 2022) and in formal languages with theorem provers (Polu & Sutskever, 2020; Han et al., 2022; Polu et al., 2023). On the other hand, some works explore using language models to automatically translate natural language proofs into formal ones given few-shot demonstrations (Wu et al., 2022; Jiang et al., 2023; Liu et al., 2023). Chain-of-thought reasoning (Wei et al., 2022; Kojima et al., 2022; Wang et al., 2023b) is demonstrated to be beneficial for LLMs in deriving correct answers. However, some recent works (Saparov & He, 2023; Golovneva et al., 2023) observe that the intermediate reasoning steps can be inconsistent. This paper proposes a data generation framework that taps into the comprehensive mathematical reasoning capabilities of large language models. It generates mathematical reasoning problems with informal and formal solutions that are step-wise validated by a formal theorem prover. With the framework, we obtain high-quality mathematical data. Synthesizing Mathematical Data Obtaining large-scale high-quality mathematical data is a long-standing challenge. Previous datasets rely on well-trained annotators to hand-craft and review the formal proofs (Zheng et al., 2022), which is time- and labor-consuming and results in a small data scale. Wang & Deng (2020) construct a neural generator for data synthesis, but it still requires human-written data. Besides, Wu et al. (2021) explore using a theorem generator to automatically generate formal proofs with rules. However, the rule-based generation depends on given axioms in specified orders. As a result, the generated data are restricted to a few domains. On the other hand, recent works demonstrate the effectiveness of distilling knowledge from large language models (West et al., 2022; Yuan et al., 2023; Li et al., 2023), and some of them (Wang et al., 2023c; Xu et al., 2023) explore data evolution by properly prompting the language models. The proposed framework explores eliciting mathematical knowledge from large language models to obtain diverse and large-scale mathematical data. In this framework, an interplay between the language model and a
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d36f4c0a-2044-46e7-9e9c-30ffbd328fa8
# Mustard: Mastering Uniform Synthesis Of Theorem And Proof Data ## 2 Related Works on given axioms in specified orders. As a result, the generated data are restricted to a few domains. On the other hand, recent works demonstrate the effectiveness of distilling knowledge from large language models (West et al., 2022; Yuan et al., 2023; Li et al., 2023), and some of them (Wang et al., 2023c; Xu et al., 2023) explore data evolution by properly prompting the language models. The proposed framework explores eliciting mathematical knowledge from large language models to obtain diverse and large-scale mathematical data. In this framework, an interplay between the language model and a formal proof assistant controls the quality and difficulty of the data. Using the proposed framework, we collect a large-scale mathematical dataset that contains diverse math questions of multiple difficulty levels with high-quality solutions.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1a492cec-b577-461e-9d5b-35bd63f208b5
# Mustard: Mastering Uniform Synthesis Of Theorem And Proof Data ## 3 Mustard In this work, we aim to obtain large-scale mathematical data with multi-step annotations and propose MUSTARD to generate diverse and high-quality math and theorem-proving problems with multi-step informal and formal solutions. As shown in Figure 2, MUSTARD consists of three stages. In the first concept seeding stage, MUSTARD samples a set of math concepts as the problem domain. Then in the second solution generation stage, it generates the concept-related problem and solution by prompting an LLM. In the third stage, a theorem prover is used to validate the generated solution. If the solution cannot pass the prover, the error message is returned to the second stage for another turn of solution generation. Through this interaction between the LLM and a formal proof assistant, MUSTARD can generate diverse and high-quality data that contain both informal and formal solutions. We describe the details of each stage in this section. 3.1 CONCEPT SEEDING We first build a mathematical concept pool that covers mathematical sub-subjects and educational levels as completely as possible. Specifically, we collect all math courses on the Khan Academy website, the large-scale online educational platform. The resulting pool includes concepts at four educational levels: elementary school, middle school, high school, and higher education. Each educational level has 5 to 9 math domains, covering different types of math problems such as algebra and geometry. Each domain contains subdivided mathematical concepts that inspect different mathematical abilities such as polynomial arithmetic or factorization. Concept statistics and the detailed concepts in each domain are demonstrated in Appendix B. Given the concept pool, for each educational level, MUSTARD uniformly samples 1 or 2 concepts from all domains as seeds, and then generates mathematical problems that cover the concepts (a minimal sampling sketch follows this chunk). In particular, given an educational level, taking 2 concepts from different subjects challenges the model to generate problems that join diverse domains while keeping the problems reasonable.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e95004be-007b-4c8b-9088-eb94c57b6fe1
# Mustard: Mastering Uniform Synthesis Of Theorem And Proof Data ## 3.2 Proof Generation Given the sampled mathematical concepts, MUSTARD generates math problems and their corresponding solutions. Specifically, MUSTARD leverages the capability of LLMs in generating natural language and code and prompts an LLM to generate the problem statement, its natural language solution, and a formal solution written in Lean. As a result, the LLM needs to complete the following three tasks: (T1) generating a math problem that relates to the given concepts; (T2) solving the math problem with a natural language proof; (T3) performing auto-formalization to translate the written natural language proof into a formalized proof. In this work, we use GPT-4 (OpenAI, 2023) as the LLM for proof generation. We intend to generate a problem based on the educational level, math domains, and concepts. Considering that mathematical problems include both proof and calculation, we also introduce the question type into the prompt to generate theorem-proving and word problems, respectively. Moreover, we do not include any exemplars or other manual interventions except for the sampled concepts, so as to avoid potential biases introduced by the concepts inside exemplars and to achieve more diverse generation. The prompt template is shown as follows (a small prompt-building and parsing sketch follows this chunk): You are a math expert. Now please come up with a math problem according to the following requirements. The math problem should contain a question part (indicated by ''Problem: ''), a corresponding solution in natural language (indicated by ''Informal proof:''), and a translated formal solution in Lean 3 (indicated by ''Formal proof in Lean 3:''). Please note that the informal proof and the formal proof need to be identical. Please create a [QUESTION TYPE] in the level of [EDUCATIONAL LEVEL] based on the following knowledge point(s): [CONCEPT] in [DOMAIN]; [CONCEPT] in [DOMAIN]. You must respond in the following format: # Problem: ... # Informal proof: ... # Formal proof in Lean 3: ... The "[]" indicates placeholders for the corresponding question type, educational level, concepts, and domains. Multiple concepts are separated by ";". We retrieve the text after "Problem:", "Informal proof:", and "Formal proof in Lean 3:" as the generated sample. 3.3 PROOF FILTERING In the proof-fil
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0d131abb-0500-470e-ad64-7c5ba6bfed1c
# Mustard: Mastering Uniform Synthesis Of Theorem And Proof Data ## 3.2 Proof Generation in [DOMAIN]; [CONCEPT] in [DOMAIN]. You must respond in the following format: # Problem: ... # Informal proof: ... # Formal proof in Lean 3: ... The "[]" indicates placeholders for the corresponding question type, educational level, concepts, and domains. Multiple concepts are separated by ";". We retrieve the text after "Problem:", "Informal proof:", and "Formal proof in Lean 3:" as the generated sample. 3.3 PROOF FILTERING In the proof-filtering stage, MUSTARD interacts with the Lean Prover (de Moura et al., 2015) to obtain validation messages for the proof steps, which guide data revision and filtering. Specifically, after a formal solution is passed to the Lean Prover, if the prover returns no error message, the corresponding data point is collected into the valid dataset. Otherwise, MUSTARD collects the error messages from the prover and prompts the language model to revise the invalid solution (a sketch of this loop follows this chunk). To help the language model locate the incorrect lines described in the error messages, we also add a line number at the beginning of each line in the formal solution. The verification and self-refinement are performed in multiple rounds until the LLM generates a valid solution. We use the number of rounds to measure the difficulty of the generated sample, assuming that a difficult problem is hard for an LLM to solve and requires more rounds of correction. The prompt template of a single round of correction is demonstrated as follows, and the complete prompt template is shown in Table 13 in Appendix C.1:
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
25f0a952-4841-460c-960e-3976023a37ba
# Formal proof (c) in Lean 3: '''lean line 1 <code> line 2 <code> line 3 <code> ... '''
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8505e55a-f5b8-4dca-8799-c12c675db828
# Error messages for Formal proof (c) from Lean Prover: <error messages> 4 EXPERIMENTS 4.1 CASE STUDY We first inspect the data points generated by MUSTARD. Table 1 shows a generated math problem in which MUSTARD creatively combines two mathematical concepts and constructs a reasonable question. The generated question includes knowledge from both concepts, suggesting that MUSTARD can join the concepts and construct a reasonable question. Furthermore, Table 2 demonstrates a case in which MUSTARD provides solid and comprehensive solutions in both natural language and Lean. Although the formal proof is long, it is consistent and passes the prover's validation, demonstrating that MUSTARD can generate long valid solutions. 4.2 HUMAN EVALUATION To further explore the quality of the data generated by MUSTARD, we recruit professionals who have expertise in mathematics and the Lean language to perform a sanity check on the data points. We randomly select 200 data points from the generated data, 100 of which pass the Lean Prover (Group Valid) and 100 of which do not (Group Invalid). The sanity check covers the four sections of each data point (i.e., informal statement, informal proof, formal statement, and formal proof) and includes a factuality check and a consistency check. Specifically, a high-quality data point should have a factually correct informal statement (D1) and a correct solution (D4). The formal statement and proof should be aligned with the informal descriptions (D5, D6). Moreover, the desired data point should meet the specified seed concepts (D2) and question type (D3). The six inspection dimensions and their requirements are demonstrated in Table 3. A data point is scored 1 in a dimension if it meets the requirement; otherwise, it gets 0. The accuracies of Group Valid and Group Invalid in each dimension are demonstrated in Table 3. We also report the corresponding p-value in each dimension (one way of computing such a p-value is sketched after this chunk). (D4) and (D6) show significant differences in accuracy between the two groups. The results indicate that high-quality data points have significantly better auto-formalization results. As a result, given the validation of formal proofs by the Lean Prover, the data points have guaranteed high-quality informal proofs. Moreover, (D1) also shows significance with the inspected data scaled up. The differences in statement alignment (D5) and informal statement relevance (D2) between the two groups are less significant. Furthermore, no significant differences are observed
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e68775a9-3888-45c8-a794-f455d6a26e90
# Error messages for Formal proof (c) from Lean Prover: Table 3. We also report the corresponding p-value in each dimension. (D4) and (D6) show significant differences in accuracy between the two groups. The results indicate that high-quality data points have significantly better auto-formalization results. As a result, given the validation of formal proofs by the Lean Prover, the data points have guaranteed high-quality informal proofs. Moreover, (D1) also shows significance with the inspected data scaled up. The differences in statement alignment (D5) and informal statement relevance (D2) of the two groups are less significant. Furthermore, no significant differences are observed in question type classification (D3), which indicates that Lean Prover validation in MUSTARD does not significantly influence the classification. Overall, the human evaluation results suggest that formally validated data have significantly higher quality.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a1aef054-1091-42d3-aa49-54ca74755d78
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application To evaluate the impact of MUSTARDSAUCE on enhancing mathematical reasoning abilities, we use the data to fine-tune smaller-scale language models and evaluate them on math word problems (MWP) and automated theorem proving (ATP). Specifically, given all the generated data, we first

| Question Type: Theorem Proving. | Educational Level: Middle School. | k=2. |
Concept(s): Geometry in 8th grade; Algebraic expressions
Informal Statement Given a rectangle ABCD where AB is x + 5 and AD is 2x - 3. Prove that the area of the rectangle ABCD is (2x^2 + 7x - 15) square units.

| Question Type: Word Problem. | Educational Level: Higher Education. | k=1. |
Concept(s): *Series* in Integral Calculus.
Informal Statement Prove that the integral of the series ∑ (from n = 1 to infinity) of (-1)^(n+1)/n equals ln(2).
Informal Proof We start by recognizing the series ∑ (from n = 1 to infinity) of (-1)^(n+1)/n as the alternating harmonic series, which is known to converge to ln(2).
Formal Statement and Proof
import data.real.basic
import analysis.special_functions.pow
open real

lemma integral_of_series : ∫ x in 0..1, (ln (1 + x)) = ∑' (n : ℕ), ((-1)^(n+1) / (n + 1)) :=
begin
  have h_der : ∀ x ∈ set.Icc (0 : ℝ) 1, deriv (λ x, (ln (1
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1c6e5802-5671-4582-ac9a-a7cae4714316
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application To evaluate the impact of MUSTARDSAUCE on enhancing mathematical reasoning abilities, we use the data to fine-tune smaller-scale language models and evaluate them on math word problems (MWP) and automated theorem proving (ATP). Specifically, given all the generated data, we first
the series ∑ (from n = 1 to infinity) of (-1)^(n+1)/n Formal Statement and Proof
import data.real.basic
import analysis.special_functions.pow
open real

lemma integral_of_series : ∫ x in 0..1, (ln (1 + x)) = ∑' (n : ℕ), ((-1)^(n+1) / (n + 1)) :=
begin
  have h_der : ∀ x ∈ set.Icc (0 : ℝ) 1, deriv (λ x, (ln (1 + x))) x = 1 / (1 + x),
  { intros x hx,
    have h_pos : 0 < 1 + x := by nlinarith,
    exact deriv_ln_one_add h_pos },
  have h_int : interval_integral (λ x, 1 / (1 + x)) 0 1 volume = ∑' (n : ℕ), ((-1)^(n+1) / (n + 1)),
  { have h_frac : ∀ (n : ℕ), ∫ x in 0..1, x^n = 1 / (n + 1),
    { intro n,
      calc ∫ x in 0..1, x^n = [x^(n+1) / (n+1)] | 0..1 : integral_pow n
      ... = 1 / (n + 1) : by { rw integral_interval, simp } },
    rw [interval_integral.integral_of_le, h_frac],
    simp,
    linarith },
  have h_eq : ∫ x in 0..1, (ln (1 + x)) = ∫ x in 0..1, 1 / (1 + x),
  { congr, ext, exact h_der x (set.mem_Icc.mpr ⟨by nlinarith, by nlinarith⟩) },
  rw [h_eq, h_int],
end
have 5,866 valid data points that pass the Lean Prover. We denote this subset MUSTARDSAUCE-valid. We then extract the same number of invalid data points as the subset of MUSTARDSAUCE
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
eeec45a8-635a-4bbf-9925-4595f4ed21f7
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application To evaluate the impact of MUSTARDSAUCE on enhancing mathematical reasoning abilities, we use the data to fine-tune smaller-scale language models and evaluate them on math word problems (MWP) and automated theorem proving (ATP). Specifically, given all the generated data, we first
(ln (1 + x)) = ∫ x in 0..1, 1 / (1 + x), { congr, ext, exact h_der x (set.mem_Icc.mpr ⟨by nlinarith, by nlinarith⟩) }, rw [h_eq, h_int], end
have 5,866 valid data points that pass the Lean Prover. We denote this subset MUSTARDSAUCE-valid. We then extract the same number of invalid data points as the subset MUSTARDSAUCE-invalid, and extract an equally sized random subset MUSTARDSAUCE-random. Each MUSTARDSAUCE subset contains 5,866 data points. Moreover, we randomly split MUSTARDSAUCE-valid into 5,866 training data, 500 validation data, and 500 test data for benchmarking model performances on the dataset. We denote the test set as MUSTARDSAUCE-test. Furthermore, we also test on the entire generated data set with 28,316 data points, which we denote MUSTARDSAUCE-tt. We employ LoRA (Hu et al., 2021) for fine-tuning the open-source GPT2-large (Radford et al., 2019), Llama 2-7B, and Llama 2-70B (Touvron et al., 2023) on each MUSTARDSAUCE subset (a configuration sketch follows this chunk).

| Inspection Dimension | Requirement | Valid | Invalid | p-value |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8b0b3a2d-1abb-4959-a9e6-1ea8f5bbc86b
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application

| Inspection Dimension | Requirement | Valid | Invalid | p-value |
|---|---|---|---|---|
| (D1) IS Correctness | Whether the informal statement is factually correct. | 93.50 | 83.5 | 0.00167 |
| (D2) IS Relevance | | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ca99bafa-d244-4b51-bff5-6ecc203d91f3
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application

| Inspection Dimension | Requirement | Valid | Invalid | p-value |
|---|---|---|---|---|
| | | | 83.5 | 0.00167 |
| (D2) IS Relevance | Whether the informal statement is relevant to each seed concept. | 87.50 | 92.5 | 0.09604 |
| (D3) RT Classification | Whether the informal statement is of the required question type. | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4a00c006-faae-40f1-837a-f10532165237
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application

| Inspection Dimension | Requirement | Valid | Invalid | p-value |
|---|---|---|---|---|
| | | | 92.5 | 0.09604 |
| (D3) RT Classification | Whether the informal statement is of the required question type. | 67.00 | 68.5 | 0.74903 |
| (D4) IP Correctness | Whether the informal proof correctly solves the informal statement. | 88.50 | 73.5 | 0.00012 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3c96f7a8-1a9d-4bce-8e7d-5934586d6afe
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application

| Inspection Dimension | Requirement | Valid | Invalid | p-value |
|---|---|---|---|---|
| | | 88.50 | 73.5 | 0.00012 |
| (D5) IS-FS Alignment | Whether the informal statement and the formal statement describe the same problem and are aligned with each other. | 74.00 | 66.5 | 0.10138 |
| (D6) IP-FP Alignment | | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
387fb55d-0100-42cb-8d36-4b8339a51e6b
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application

| Inspection Dimension | Requirement | Valid | Invalid | p-value |
|---|---|---|---|---|
| | | | 66.5 | 0.10138 |
| (D6) IP-FP Alignment | Whether the informal proof and the formal proof describe the same solution and have aligned proof steps. | 72.00 | 54 | 0.00018 |

| MODEL | Zero (G) | Few (G) |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d6c125b0-49c4-48f3-a4de-f542e78748d2
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
| MODEL | Zero (G) | Few (G) | Zero (M) | Few (M) | MODEL | Zero (G) | Few (G) | Zero (M) | Few (M) |
Baselines: GPT2-large: 3.4; > gt: 14.6, 17.4; Llama 2-7B: 7.2; > gt
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c84cac07-82ac-4bad-98ca-b6ac000bd0d8
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
GPT2-large: 3.4; > gt: 14.6, 17.4; Llama 2-7B: 7.2; > gt: 24.5, 28.2, 16.1, 18.9. Fine-tuning: GPT2-large > tt: 4.2, 6.8; > tt > gt; GPT2-large > in: 3.9, 6.4; > in >
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7f383b21-73d8-4880-b6ea-bae068f0a1c7
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
> in: 3.9, 6.4; > in > gt: 15.4, 17.7; GPT2-large > ra: 4.1, 6.7; > ra > gt: 15.7, 18.5; GPT2-large > va: 4.6 (+12.20%), 7.0 (+4.48%), 1.8
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
02180728-00c1-4269-8de6-78ac609be166
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
va: 4.6 (+12.20%), 7.0 (+4.48%), 1.8 (+28.57%), 2.8 (+27.27%); GPT2-large > va > gt: 16.5 (+5.10%), 20.1 (+8.65%), 5.6 (+16.67%), 8.4 (+7.69%); 27.4, 31.5; > tt
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0518afb5-7166-4fa2-bcf4-2013d2837f0a
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
(+16.67%), 8.4 (+7.69%); 27.4, 31.5; > tt: 9.6, 16.0; > tt > gt; Llama 2-7B > in: 9.1, 14.9; > in > gt: 26.9, 30.3; Llama 2-7B > ra: 9.5, 15.4
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
900daa7e-bea4-47e5-9e07-961a25983255
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
gt: 26.9, 30.3; Llama 2-7B > ra: 9.5, 15.4; > ra > gt: 27.1, 30.7; Llama 2-7B > va: 10.3 (+8.42%), 16.9 (+9.74%), 3.2 (+6.67%), 4.2 (+16.67%); Llama 2-7B > va
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4070ff82-9dff-48a6-95c7-5c265b7311c5
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
(+6.67%), 4.2 (+16.67%); Llama 2-7B > va > gt: 27.9 (+2.95%), 32.5 (+5.86%), 13.8 (+9.52%), 15.0 (+5.63%).
Detailed model configuration and training procedure are described in Appendix F. For the task of math word problems, we use GSM8K (Cobbe et al., 2021) and the MATH dataset (Hendrycks et al., 2021) for evaluation. For evaluating automated theorem proving, we use mathlib and the miniF2F (Zheng et al., 2022) benchmark. We also evaluate models on MUSTARDSAUCE-test after fine-tuning on the MUSTARDSAUCE-valid training split. Tables 4 and 5 demonstrate the model performances. We also follow Han et al. (2022) to ablate the fine-tuning steps and demonstrate the results in Table 6. In general, fine-tuning the models on MUSTARDSAUCE improves their mathematical reasoning. On average
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a37e93ab-f7ce-43f1-b42e-a208d2a1ef92
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application al. (2021) for evaluation. For evaluating automated theorem proving, we use mathlib and the miniF2F (Zheng et al., 2022) benchmark. We also evaluate models on MUSTARDSAUCE-test after fine-tuning on the MUSTARDSAUCE-valid training split. Tables 4 and 5 demonstrate the model performances. We also follow Han et al. (2022) to ablate the fine-tuning steps and demonstrate the results in Table 6. In general, fine-tuning the models on MUSTARDSAUCE improves their mathematical reasoning. On average, we have an 18.15% relative performance gain after fine-tuning with MUSTARDSAUCE-valid compared with fine-tuning with MUSTARDSAUCE-random in ATP (Table 5), and 11.01% in MWP (Table 4). The fine-tuned Llama 2-7B achieves average gains of 15.41% and 8.18% on ATP and MWP, and the fine-tuned GPT2-large achieves 20.89% and 15.41%, respectively. Specifically, in ATP, Llama 2-7B achieves significant performance gains of 16.00% on both mathlib and miniF2F, while improving by 17.31% on MUSTARDSAUCE-test. In MWP, the performance improvements are also consistent across the two datasets in both zero-shot and few-shot inference. We further compare the results fine-tuned with MUSTARDSAUCE-tt and MUSTARDSAUCE-valid. We find that models fine-tuned with the entire generated data are inferior to models fine-tuned with MUSTARDSAUCE-valid. Although the increase in the amount of fine-tuning data makes the model perform better compared to fine-tuning on MUSTARDSAUCE-invalid and MUSTARDSAUCE-
| MODEL | mathlib | miniF2F | test | MODEL | mathlib | miniF2F | test |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
60b2f6a6-ea0c-486c-9f16-fb633937314b
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application model perform better compared to fine-tuning on MUSTARDSAUCE-invalid and MUSTARDSAUCE-
| MODEL | mathlib | miniF2F | test | MODEL | mathlib | miniF2F | test |
Baselines: GPT2-large: 0.0, 0.0; > mt: 5.6, 2.9, 8.6; Llama 2-7B: 0.0, 0.0; > mt: 14.3, 7.0, 10.8. Fine-tuning: GPT2-large >
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b1d03a96-d2b8-4e38-b0c9-7b36f4a52bc9
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
14.3, 7.0, 10.8. Fine-tuning: GPT2-large > in: 2.0, 0.0, 6.0; > in > mt: 5.9, 2.0, 8.2; GPT2-large > ra: 3.0
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4d73b1d4-62f6-42fd-844f-a72ce06c54dc
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
8.2; GPT2-large > ra: 3.0, 1.2, 7.0; > ra > mt: 6.6, 2.9, 9.6; GPT2-large > va: 3.7 (+23.33%), 1.6
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
827ac0f2-c5cf-43ed-875d-cfad7c9811f4
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
va: 3.7 (+23.33%), 1.6 (+33.33%), 8.3 (+18.57%); GPT2-large > va > mt: 7.4 (+12.12%), 3.7
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
cd90c470-37c2-452f-80ca-bba3b4f3991b
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
mt: 7.4 (+12.12%), 3.7 (+27.59%), 10.6 (+10.42%); Llama 2-7B > tt: 8.3, 2.6, 11.7; > tt > mt
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fc1eccfc-1b0f-411b-a814-1fde1621df48
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
tt > mt: 15.1, 7.0, 13.6; Llama 2-7B > in: 5.8, 1.2, 8.6; > in > mt: 11.6, 5.7, 12.6; Llama 2-7B
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6cb5a14d-98b4-4b47-b7fc-950561d21646
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
mt: 11.6, 5.7, 12.6; Llama 2-7B > ra: 7.5, 2.5, 10.4; > ra > mt: 14.7, 6.6, 13.2; Llama 2-7B > va
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
64e03979-7947-4600-afef-6b08cc8dd3d2
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
6.6, 13.2; Llama 2-7B > va: 8.7 (+16.00%), 2.9 (+16.00%), 12.2 (+17.31%); Llama 2-7B > va > mt: 15.7
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f0ee7dbf-029e-4add-9df2-37e9488af81f
# Error messages for Formal proof (c) from Lean Prover: ## 4.3 Data Quality by Downstream Application
va > mt: 15.7 (+6.80%), 7.8 (+18.18%), 14.4 (+18.18%).

| MODEL | test |
|---|---|
| GPT2-large > va | 8.3 |
| GPT2-large > va > mt | 10.6 |
| GPT2-large > mt > va | 9.8 |
| Llama 2-7B > va | 12.2 |
| Llama 2-7B > va > mt | 14.4 |
| Llama 2-7B > mt > va | 13.8 |

random, the model's performance still lags behind that of fine-tuning on a smaller amount of higher-quality data. Therefore, our proposed framework, which introduces the theorem prover, is effective and beneficial. Furthermore, complementary experimental results for the larger Llama 2-70B are demonstrated in Table 28 in Appendix G. The results suggest that our method remains effective when fine-tuning a larger language model.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d353a771-b623-4496-9dde-a51b03413c58
# Error messages for Formal proof (c) from Lean Prover: ## 4.4 Impact of Data Scalability To further study the impact of data scale on the fine-tuning results, we randomly sample 75%, 50%, 25%, and 0% of the data from MUSTARDSAUCE-valid and fine-tune Llama 2-7B. The results are shown in Figure 3. In general, the results on all datasets increase as the fine-tuning data scales up. Specifically, performances on MUSTARDSAUCE-test and mathlib show the most significant growth, with no decrease in the growth rate. Therefore, we expect further performance improvements when more high-quality data are included. 4.5 PASS RATE We study the mathematical generation ability of MUSTARD by investigating its pass rates on generating valid data points (a small tallying sketch follows this chunk). The pass@1 results of the generated formal proofs of GPT-4 (OpenAI, 2023) and GPT-3.5 (OpenAI, 2022) are shown in Table 7. We have the following observations. First of all, the overall pass@1 results are high, showing that the LLMs, especially GPT-4, are capable of zero-shot mathematical reasoning. Second, the pass rates of word problems are generally higher than those of theorem proving, indicating that word problems are relatively easier and more familiar to the LLMs, while theorem proving is more challenging. Third, all-at-once generation and step-by-step generation have similar pass rates at lower educational levels. For more challenging questions, such as those at the high school level and the higher-education level, step-by-step generation shows slight advantages over all-at-once generation. This indicates that dividing (T1), (T2), and (T3) and conquering them separately helps the model generate higher-quality formal proofs, but the improvement is limited. Last but not least, the improvements in pass rates after 1-step and 2-
| Theorem Proving | Word Problem |
| All (GPT-4) | Step (GPT-4) | All
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
be96374d-48eb-433b-ad1b-afd2f3353961
# Error messages for Formal proof (c) from Lean Prover: ## 4.4 Impact of Data Scalability
| Theorem Proving | Word Problem |
| All (GPT-4) | Step (GPT-4) | All (GPT-3.5) | All (GPT-4) | Step (GPT-4) | All (GPT-3.5) |
| #correct=0 | 1 (∆) | 2
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
55774cbe-d5b6-46a2-a409-7d5ee0ffd2e3
# Error messages for Formal proof (c) from Lean Prover: ## 4.4 Impact of Data Scalability
| #correct=0 | 1 (∆) | 2 (∆) | 0 | (∆) | 2 (∆) | 0 |
k=1: elem 26.0; midd 16.4; high 6.8; higher 2.1
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
07b5774e-fa0c-4d0f-aed4-cd96c95bcee5
# Error messages for Formal proof (c) from Lean Prover: ## 4.4 Impact of Data Scalability
k=1: midd 16.4; high 6.8; higher 2.1. k=2: elem 24.1; midd 14.0; high 3.8; higher 1.1.
step corrections are significant. For example, theorem proving at the elementary-school level with 1 seed concept improves by 22.0% after 1-step correction and by 33.9% after 2-step correction. Word problems at the elementary-school level with 1 seed concept improve by 45.3% after 2-step correction. The most difficult setting, generating theorem-proving data at the higher-education level with 2 seed concepts, achieves a 2.8% improvement after 2-step correction, and the word-problem counterpart achieves a 6.1% improvement. This indicates that the LLMs have great potential for self-correction given the error message feedback and limited instructions. 4.6 DIVERSITY AND DIFFICULTY We compute ROUGE-L (Lin, 2004) to check the diversity of the generated informal statements and proofs. The resulting ROUGE-L scores are below 0.25 and indicate high data diversity. We demonstrate the detailed computation and results in Appendix D.2. We then investigate the proof lengths in MUSTARD
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
271da0af-7459-466d-9e5e-3ea67e57c03d
We then investigate the proof lengths in MUSTARDSAUCE; the distributions are shown in Figure 4. We count both the reasoning steps of the formal statement-proof pairs and the steps of the formal proof only, shown on the left- and right-hand sides of Figure 4, respectively. Proof length increases with educational level: solving elementary problems takes about 5 to 10 steps, while solving higher-educational problems requires a median of 10 to 15 steps. The most challenging problems require around 30 reasoning steps, or about 20 formal proof steps. Therefore, MUSTARDSAUCE provides diverse mathematical problems spanning multiple topics and difficulty levels.
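One plausible way to obtain formal proof-step counts like those behind Figure 4 is to count the tactic lines between `begin` and `end` in each Lean 3 proof; the sketch below uses this convention, which is an assumption rather than the paper's exact counting rule.

```python
def count_proof_steps(lean_source: str) -> int:
    """Count formal proof steps as non-empty, non-comment lines inside begin ... end.

    This is an approximation: the actual step counts may use a different
    convention (e.g., counting tactics separated by commas).
    """
    steps, inside = 0, False
    for line in lean_source.splitlines():
        stripped = line.strip()
        if stripped == "begin":
            inside = True
            continue
        if stripped == "end":
            inside = False
            continue
        if inside and stripped and not stripped.startswith("--"):
            steps += 1
    return steps

example = """theorem two_add_two : 2 + 2 = 4 :=
begin
  -- evaluate both sides
  norm_num,
end"""
print(count_proof_steps(example))  # 1
```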
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
24fa520a-49d1-4ee5-a96b-710f4c940083
## 5 Conclusion

In this paper, we introduce MUSTARD to automatically generate mathematical datasets with high-quality solutions that cover a variety of mathematical skills. Leveraging an LLM and the Lean Prover, MUSTARD generates the problem statement, the informal solution, and the formal solution, and it uses the Lean Prover to automatically verify the formal solution and provide feedback for revision. Finally, we apply the proposed MUSTARD and obtain 5,866 problems with step-by-step solutions that cover different educational levels and mathematical abilities. The resulting dataset has shown high quality, diversity, and effectiveness in improving language models' mathematical reasoning performance, demonstrating the great potential of MUSTARD and its dataset for further research on language models.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6958f68a-3f05-40e0-bce2-4754b0f72078
## Acknowledgements

This work was supported in part by the National Key R&D Program of China under Grant No. 2020AAA0109700, the Guangdong Outstanding Youth Fund (Grant No. 2021B1515020061), the National Natural Science Foundation of China (NSFC) under Grant No. 61976233, the Mobility Grant Award under Grant No. M-0461, the Shenzhen Science and Technology Program (Grant No. GJHZ20220913142600001), the Nansha Key R&D Program under Grant No. 2022ZD014, and the National Natural Science Foundation of China under Grant No. 62006255. We thank MindSpore, a new deep learning computing framework, for partial support of this work. The authors would also like to thank Hui Jin, Jianhao Shen, Chengwu Liu, Cen Li, and Junhao Cheng for their hard work on the manual check related to Section 4.2.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
bc044a66-64fc-4de5-ad56-0e75565de938
## A Future Works

The current formal validation still suffers from mild inconsistency between the #reduce statements and the various kinds of theorem proofs. In the future, we will explore more rigorous and careful data filtering. We will also explore data generation and mathematical reasoning via the same language model, which is an interesting setup for studying large language models' proficiency in mathematical reasoning. Moreover, the ablation study on data scalability shows consistent performance increases when more data from MUSTARD are introduced, suggesting great potential for scaling. Fortunately, MUSTARD reduces the cost of acquiring such high-quality, step-by-step, complex reasoning data and yields correct, scalable, and reusable data. Therefore, in future work, we would like to build a community in which all members can join the data synthesis process and acquire and share more high-quality data with the whole community.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
531adbc9-6474-40e5-8e2f-d742f98921a5
## B Mathematical Concepts

Table 8 shows the concept statistics. Concepts are grouped into multiple domains, mainly according to subjects, except that concepts at the elementary level are grouped by grade due to the lack of subject division. Tables 9, 10, 11, and 12 list the detailed concepts in each domain.

[Table 8: number of mathematical concepts per domain at each educational level. Recoverable domain names: elementary school — 1st through 6th grade; middle school — 7th grade, 8th grade, Algebra basics, Pre-algebra, Basic geometry and measurement; high school and higher education — Algebra 1, Algebra 2, High school geometry, Trigonometry, Statistics and probability, High school statistics, Precalculus, Calculus 1, Calculus 2. The per-domain concept counts are not recoverable.]
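For concreteness, the level → domain → concept grouping described above can be represented as a nested mapping from which seed concepts are drawn. The entries below are illustrative only: the domain and concept names are a hypothetical subset, not the full lists of Tables 9-12, and the sampling routine is a sketch rather than MUSTARD's actual procedure.

```python
import random

# Hypothetical slice of the concept taxonomy: educational level -> domain -> concepts.
CONCEPTS = {
    "elementary_school": {
        "3rd grade": ["multiplication", "fractions on a number line"],
        "6th grade": ["ratios", "negative numbers"],
    },
    "middle_school": {
        "Pre-algebra": ["exponents", "proportional relationships"],
        "Basic geometry and measurement": ["area of triangles", "volume"],
    },
    "high_school": {
        "Algebra 2": ["polynomial arithmetic", "logarithms"],
        "Trigonometry": ["unit circle", "law of sines"],
    },
}

def sample_seed_concepts(level, k=1, rng=None):
    """Sample k seed concepts from one domain of the given educational level."""
    rng = rng or random.Random(0)
    domain = rng.choice(sorted(CONCEPTS[level]))
    return domain, rng.sample(CONCEPTS[level][domain], k)

print(sample_seed_concepts("high_school", k=2))  # prints a domain and two of its concepts
```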
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3f46e523-d721-47bf-94a0-b6ff66327c2a
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts | | | | | 7 | | | | | | | | | 7 | | | | | | | | | 8 | | | | | | | | | 15 | | | | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c1032fce-78f4-42d5-a0a0-ddae0e15a16b
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts | | | 15 | | | | | | | | | 14 | | | | | | | | | 14 | | | | | | | | | 6 | | | | | | | | | 5 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
aed80b45-4c9d-4f58-a008-7146793244c3
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts | | | | | | | | | 5 | | | | | | | | | 9 | | | | | | | | | 4 | | | | | | | | | 1st grade | | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
bcdc4198-eefd-4b03-aa2f-52c4175a276f
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts | | | | | 1st grade | | | | | | | | | 2nd grade | | | | | | | | | 3rd grade | | | | | | | | | 4th grade | | | | | | | | | 5th grade
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a0481b06-a4a5-4c32-96b6-aad38ba6e2b9
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts | | 4th grade | | | | | | | | | 5th grade | | | | | | | | | 6th grade | | | | | | | | | 3 | | | | | | | | | 8 | | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8902fe58-e0d0-4850-a0cf-4bb07831ea68
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts | | | | | 8 | | | | | | | | | 14 | | | | | | | | | 14 | | | | | | | | | 16 | | | | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8d798d15-f3bd-4320-bba5-cbd27bab715c
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts | | | 16 | | | | | | | | | 8 | | | | | | | | | 7th grade | | | | | | | | | 8th grade | | | | | | | | | Algebra basics | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6fe64f0f-66f2-46f4-9ac8-279a9cf81c6b
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts | | | | | | | Algebra basics | | | | | | | | | Pre-algebra | | | | | | | | | Basic geometry | | | | | | | | | and measurement | 14 | | | | | | | | 10
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
16a58b5e-b808-4152-b464-b750f7d01605
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts | | and measurement | 14 | | | | | | | | 10 | | | | | | | | | 12 | | | | | | | | | 5 | | | | | | | | | Algebra 1 | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
37599367-da1a-451c-9358-f2d8d47bb76b
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts | | | | | | Algebra 1 | | | | | | | | | Algebra 2 | | | | | | | | | High school | | | | | | | | | geometry | | | | | | | | | Trigonometry
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3d2c1485-62ac-46c2-9f02-1bc220209895
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts | | geometry | | | | | | | | | Trigonometry | | | | | | | | | Statistics and | | | | | | | | | probability | | | | | | | | | High school statistics | | | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b44e8d88-013a-4345-9f62-00d04c3cc53e
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts | | | | | High school statistics | | | | | | | | | Precalculus | | | | | | | | | Calculus 1 | | | | | | | | | Calculus 2 | | | | | | | | | 16 | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
84c640d5-b2c2-4b97-a022-e2a4d181458c
# Error messages for Formal proof (c) from Lean Prover: ## B Mathematical Concepts | | | | | | | | 16 | | | | | | | | | 7 | | | | | | | | | 10 | | | | | | | | | 8 | | | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08957v1.md", "file_path": "paper_data/2402.08957v1.md", "file_size": 123832, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }