# LlaSMol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset

## Example Tasks in SMolInstruct (Figure 1)

**Side Effect Prediction (PP-SIDER).** Query: Are there any known side effects of <SMILES> CC1=CC(C)=C(NC(=O)CN(CC(=O)O)CC(=O)O)C(C)=C1Br </SMILES> affecting the heart? Response: <BOOLEAN> No </BOOLEAN>

**Molecule Captioning (MC).** Query: Describe this molecule: <SMILES> CCOC(=O)C1=CN=CN1[C@H](C)C1=CC=CC=C1 </SMILES> Response: The molecule is an imidazole derivative with short-acting sedative, hypnotic, and general anesthetic properties. Etomidate appears to have gamma-aminobutyric acid (GABA)-like effects, mediated through the GABA-A receptor. The action enhances the inhibitory effect of GABA on the central nervous system by causing chloride channel opening events, which leads to membrane hyperpolarization.

**Forward Synthesis (FS).** Query: <SMILES> NC1=CC=C2OCOC2=C1.O=CO </SMILES> Based on the reactants and reagents given above, suggest a possible product. Response: A possible product can be <SMILES> O=CNC1=CC=C2OCOC2=C1 </SMILES>

**Molecule Generation (MG).** Query: Give me a molecule that satisfies the conditions outlined in the description: The molecule is a member of the class of tripyrroles that is a red-coloured pigment with antibiotic properties produced by Serratia marcescens. It has a role as an antimicrobial agent, a biological pigment, a bacterial metabolite, an apoptosis inducer and an antineoplastic agent. It is a tripyrrole, an aromatic ether and a ring assembly. Response: Here is a potential molecule: <SMILES> CCCCCC1=C(C)NC(/C=C2\N=C(C3=CC=CN3)C=C2OC)=C1 </SMILES>

**Retrosynthesis (RS).** Query: Identify possible reactants that could have been used to create the specified product. <SMILES> CC1=CC=C(N)N=C1N </SMILES> Response: <SMILES> CC(C#N)CCC#N.N </SMILES>

## 1. Introduction

We meticulously incorporate tasks with meaningful applications, collect data from diverse sources, and apply rigorous scrutiny for quality control. The resulting dataset consists of 14 tasks and over 3M samples, laying a solid foundation for training and evaluating LLMs for chemistry. The tasks and examples in the dataset are illustrated in Figure 1. Furthermore, we build a series of LLMs for chemistry named **LlaSMol** by fine-tuning four open-source LLMs, namely Galactica, Llama 2, Code Llama, and Mistral, on SMolInstruct with LoRA (Hu et al., 2022). A comprehensive comparison between LlaSMol and various LLMs reveals that LlaSMol outperforms existing LLMs, including GPT-4, across all the tasks by a substantial margin. Interestingly, the Mistral-based model leads the performance among all the LlaSMol models by a significant margin, showcasing Mistral's great potential for chemistry. Moreover, by tuning only a small fraction of parameters once for all the tasks, LlaSMol achieves results comparable to SoTA task-specific models that are designed and trained separately for each individual task. Our further exploration reveals that adding trainable parameters can lead to a substantial performance boost. This suggests that larger-scale training combined with more sophisticated base models could allow LLMs to match or surpass SoTA task-specific models on chemistry tasks.

In answering the critical questions raised earlier, our findings underscore the great potential of LLMs to effectively perform chemistry tasks. While there is still room for further improvement, our LlaSMol series can serve as a strong set of foundation models for chemistry in the future.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
cc37820d-7fb3-408e-ae86-32dab76234da
## 2. Related Work

**Task-specific Models for Chemistry.** In recent years, many deep learning models have been developed to tackle different chemistry tasks. For example, Molecular Transformer (Schwaller et al., 2019) and RSMILES (Zhong et al., 2022) formulate forward synthesis and retrosynthesis prediction as sequence-to-sequence translation problems. Chemformer (Irwin et al., 2022) pretrains a transformer model on a large-scale SMILES dataset and fine-tunes it for various downstream tasks, such as forward synthesis and property prediction. Uni-Mol (Zhou et al., 2023) incorporates 3D information of molecules into the pretraining of a transformer model and fine-tunes it for downstream tasks. MolT5 (Edwards et al., 2022) first pretrains a T5 model on both SMILES and natural language, and then fine-tunes it to translate SMILES into natural language (i.e., molecule captioning) or vice versa (i.e., molecule generation). Despite their effectiveness, these models operate on single tasks and therefore cannot harness knowledge shared across diverse chemistry tasks the way LLMs can.

**LLMs for Chemistry.** Recent efforts have integrated LLMs with chemistry to solve key chemistry problems, such as molecule property prediction and retrosynthesis. These efforts fall into two categories: (1) benchmark studies and (2) fine-tuning LLMs on new datasets. Multiple benchmark studies (White et al., 2023; Guo et al., 2023; Jablonka et al., 2023; Liu et al., 2023) have evaluated the capabilities and limitations of different off-the-shelf LLMs, such as GPT-4 and Llama, on chemistry problems. For example, Guo et al. (2023) find that these LLMs do not perform well on chemistry tasks and often produce chemically implausible outputs. These findings highlight the need for further efforts to improve LLMs via fine-tuning for chemistry tasks.

To improve LLMs for chemistry tasks, multiple instruction tuning datasets specific to chemistry have been developed. Mol-Instructions (Fang et al., 2023) consists of 1.3M instructions for multiple small-molecule tasks. However, according to our results (Section 5.3), fine-tuning on their dataset does not consistently improve LLMs' performance compared to LLMs without fine-tuning. Drugchat (Liang et al., 2023) collects an instruction tuning dataset with 10.8K drug molecules along with 143K instructions regarding their drug-specific properties. MolOpt-Instructions (Ye et al., 2023) consists of instructions with 1M molecule pairs for molecule optimization on six properties, in which each pair contains similar molecules with different properties. Compared with these datasets, SMolInstruct is much larger and covers a more diverse and comprehensive set of chemistry tasks. This could enable LLMs to better understand molecule representations and learn chemistry knowledge across tasks.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2e96cc6a-2f9a-4940-bdd3-9b331313dab4
## 3. Preliminaries

Molecules form the basis of chemistry, fundamentally determining the properties and behaviors of most substances. A molecule is a group of atoms held together by chemical bonds (Brown, 2018). In this paper, we focus on small molecules, which typically have no more than 100 atoms and a low molecular weight under 1,500 Daltons (Lenci & Trabocchi, 2020). Small molecules perform many important functions, such as signaling in cellular biology (McNerney & Styczynski, 2018), pest control in agriculture (Burns et al., 2006), micronutrients in nutrition (Chen et al., 2022), and drug therapy in medicine (Lenci & Trabocchi, 2020). Given their importance, it is essential to integrate LLMs into the study of small molecules to further advance their design and development.

Molecules can be represented in multiple ways, such as SMILES strings, IUPAC names, and molecular formulas. SMILES strings use a sequence of symbols to encode the 2D structures of molecules (Weininger, 1988). A molecule can have multiple SMILES strings; the canonical SMILES for a molecule is unique and deterministic. For example, the canonical SMILES representation of glucose is C(C1C(C(C(C(O1)O)O)O)O)O. Molecular formulas represent a molecule by enumerating the type and number of atoms it contains (Solomons et al., 2022). For example, the molecular formula for glucose is C6H12O6. IUPAC names are formal names based on natural language elements, which follow the systematic rules set by the International Union of Pure and Applied Chemistry (IUPAC) (Favre & Powell, 2014). These names are derived from the structures and functional groups of molecules and are intended to be human-readable. For example, the IUPAC name for glucose is (3R,4S,5S,6R)-6-(hydroxymethyl)oxane-2,3,4,5-tetrol.

Molecules are among the fundamental units of chemistry that participate in reactions (Brown, 2018). A reaction is a process that converts input molecules (*reactants*) into output molecules (*products*) through the breaking and forming of chemical bonds. Other molecules (*reagents*) may be present to enhance or facilitate the reaction.
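For illustration, the following minimal Python sketch (not from the paper's released code) uses RDKit to canonicalize SMILES strings and derive a molecular formula; the two inputs are alternative, non-canonical SMILES for the glucose example above.

```python
from rdkit import Chem
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

# Two different (non-canonical) SMILES strings encoding the same glucose graph.
smiles_variants = [
    "C(C1C(C(C(C(O1)O)O)O)O)O",
    "OCC1OC(O)C(O)C(O)C1O",
]

for s in smiles_variants:
    mol = Chem.MolFromSmiles(s)   # parse the SMILES string into a molecule object
    print(Chem.MolToSmiles(mol))  # canonical SMILES: identical for both variants
    print(CalcMolFormula(mol))    # molecular formula: C6H12O6
```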
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
eb1871e4-44b9-4650-9882-4aab4b1b13c8
## 4. SMolInstruct

## 4.1. Overview of SMolInstruct

SMolInstruct is a large-scale instruction tuning dataset that centers around small molecules. It contains a total of 14 chemistry tasks, as illustrated in Figure 1. (1) We include four name conversion tasks, namely converting IUPAC names to molecular formulas (NC-I2F), converting IUPAC names to SMILES (NC-I2S), converting SMILES to molecular formulas (NC-S2F), and converting SMILES to IUPAC names (NC-S2I). They are designed to enable a deep understanding of molecular structures and representations, which serves as fundamental knowledge for chemistry LLMs. (2) Additionally, six property prediction tasks are integrated, including PP-ESOL for water solubility (Mobley & Guthrie, 2014), PP-Lipo for octanol/water distribution coefficient (Poole & Poole, 2003), PP-BBBP for blood-brain barrier penetration (Martins et al., 2012), PP-ClinTox for toxicity to the human body (Gayvert et al., 2016), PP-HIV for HIV replication inhibition (hiv), and PP-SIDER for side effects of drugs (Kuhn et al., 2015). These properties are especially crucial for drug development. (3) Two tasks focus on the textual descriptions of molecules: molecule captioning (MC) is to generate a textual description of a given molecule, and molecule generation (MG) is to generate a molecule based on a given textual description. They require a comprehensive understanding of molecules, including their structures and properties, from their textual descriptions, and they bridge the gap between natural language and molecules. (4) Lastly, two tasks revolve around chemical reaction knowledge. Forward synthesis (FS) aims to predict potential products from reactants and reagents, and retrosynthesis (RS) involves predicting potential reactants given a product. These tasks play vital roles in real-world applications (Coley et al., 2018). For example, retrosynthesis is essential for synthesis planning, while forward synthesis is used to validate retrosynthetic suggestions.

Table 1 shows the statistics of SMolInstruct. It contains 3.4M samples.

Table 1. Statistics of SMolInstruct.

| Task | Task abbr. | #Train | #Valid | #Test | #All | Qry. | Resp. |
|---|---|---|---|---|---|---|---|
| **Name Conversion** (Data Source: PubChem) | | | | | | | |
| IUPAC to Molecular Formula | NC-I2F | 300,000 | 1,497 | 2,… | | | |
| IUPAC to SMILES | NC-I2S | 300,000 | 1,497 | 2,… | | | |
| SMILES to Molecular Formula | NC-S2F | 300,000 | 1,497 | 2,… | | | |
| SMILES to IUPAC | NC-S2I | 300,000 | 1,497 | 2,… | | | |
| **Property Prediction** (Data Source: MoleculeNet) | | | | | | | |
| ESOL | PP-ESOL | 888 | 111 | 112 | 1,111 | 43 | 22 |
| Lipo | PP-Lipo | 3,360 | 420 | 420 | 4,200 | | |
| BBBP | PP-BBBP | 1,569 | 196 | 197 | 1,962 | | |
| ClinTox | PP-ClinTox | 1,145 | 143 | 144 | 1,432 | | |
| HIV | PP-HIV | 32,900 | 4,112 | 4,… | | | |
| SIDER | PP-SIDER | 22,820 | 2,860 | 2,… | | | |
| **Molecule Description** (Data Source: Mol-Instructions, ChEBI-20) | | | | | | | |
| Molecule Captioning | MC | 56,502 | 1,269 | 2,… | | | |
| Molecule Generation | MG | 56,502 | 1,269 | 2,… | | | |
| **Chemical Reaction** (Data Source: USPTO-full) | | | | | | | |
| Forward Synthesis | FS | 988,626 | 2,086 | 4,… | | | |
| Retrosynthesis | RS | 942,065 | 2,094 | 4,… | | | |
| **Overall** | | 3,306,377 | 20,548 | | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e7cf624a-ff45-475a-b5b1-0ef415c67f24
Each sample is a query-response pair, where the query describes a task and any task-specific information (e.g., an input molecule or a textual description), and the response is a sentence containing the answer to the queried task. For all the tasks, unless other representations are explicitly required by the task definition (as in NC-I2F, NC-I2S, NC-S2F, and NC-S2I), we use SMILES as the representation for molecules.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4881848c-437a-4548-a24d-d5e17d48da0c
## 4.2. SMolInstruct Construction Details

We construct the SMolInstruct dataset by following a four-step pipeline: data collection, quality control, data splitting, and instruction construction. Detailed explanations are as follows.

**Data Collection.** After consulting domain experts and carefully pinpointing the set of meaningful tasks (summarized in Section 4.1), we collect data for these tasks from various sources, as listed in Table 1. Specifically, for the name conversion tasks (NC-I2F, NC-I2S, NC-S2F, and NC-S2I), we leverage PubChem (https://pubchem.ncbi.nlm.nih.gov/; Kim et al., 2019), one of the most comprehensive molecule databases. Within this database, we randomly select a large set of molecule entries and extract their IUPAC names, SMILES representations, and molecular formulas. The obtained data is then re-organized as input-output pairs for the tasks. For the molecular description tasks (MC and MG), we utilize a combination of ChEBI-20 (Edwards et al., 2021; 2022) and Mol-Instructions (Fang et al., 2023), as they both contain high-quality molecule-text paired data. For the property prediction tasks (PP-ESOL, PP-Lipo, PP-BBBP, PP-ClinTox, PP-HIV, and PP-SIDER), we employ the well-established MoleculeNet datasets (Wu et al., 2018), from which we carefully select 6 datasets that represent the most essential properties for small molecule chemistry. For the chemical reaction tasks (FS and RS), we collect reaction data from USPTO-full (Lowe, 2017), an extensive collection encompassing over 1M reaction samples extracted from U.S. patents. All the aforementioned datasets are widely used in previous studies (He et al., 2021; Zhong et al., 2022; Edwards et al., 2022; Irwin et al., 2022; Chen et al., 2023; Zhou et al., 2023).

**Quality Control.** To guarantee high quality, we apply rigorous scrutiny. The collected data contains many problematic and low-quality samples, which can be roughly categorized into the following three types, each paired with our curation method: (1) Chemically invalid SMILES. Numerous SMILES strings are chemically invalid (e.g., deviating from the SMILES grammar, or violating chemical valence). To address this issue, we employ RDKit (rdk), a widely used toolkit for cheminformatics, to parse molecules and detect errors. (2) Wrong or inaccurate information. Based on manual checks, we observe wrong and inaccurate information recorded in the data. For instance, within the USPTO-full dataset (Lowe, 2017), we identify and correct mislabeled reactants and reagents in chemical reactions by comparing their atom mappings with products. For the MC and MG tasks, we filter out textual descriptions that lack pertinent, molecule-specific information, using a set of rules based on wording patterns, lengths, and keywords. For PP-SIDER, we eliminate disorders with ambiguous names that could impede the creation of precise and comprehensible instructions. (3) Duplicated samples. These prevail in the data, and we carefully detect and remove them.
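The paper does not include its exact filtering code; the snippet below is a minimal sketch of the kind of RDKit validity filter described above, relying on `MolFromSmiles` returning `None` for strings that break the SMILES grammar or violate valence rules.

```python
from rdkit import Chem

def is_valid_smiles(smiles: str) -> bool:
    """Return True iff the SMILES parses and passes RDKit sanitization
    (grammar and chemical valence checks)."""
    return Chem.MolFromSmiles(smiles) is not None

samples = ["CCO", "C1=CC=CC=C1", "C1CC", "F=F=F"]
valid = [s for s in samples if is_valid_smiles(s)]
print(valid)  # the unclosed ring "C1CC" and bad-valence "F=F=F" are dropped
```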
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e0e610e8-8ce4-4057-b69d-e5ca6031043b
**Data Splitting.** Data splitting for multi-task datasets requires careful handling to avoid data leakage across tasks. For instance, FS and RS are reverse tasks, so data leakage occurs when the training set contains an FS sample for a certain chemical reaction and the test set contains an RS sample for the same reaction. This can lead to biased evaluation. Therefore, we meticulously identify sample pairs across related tasks (FS and RS, MC and MG, and the four NC tasks) that correspond to the same molecules/reactions, and ensure that matched samples are placed together in either the training or the evaluation set. Moreover, some samples may share the same input but have different outputs. For instance, in the RS task, one product (the same input) may be synthesized from multiple sets of reactants (different outputs). If such samples are placed into both the training and test sets, evaluation results may be exaggerated. Therefore, we ensure that samples with identical inputs are placed together, either inside or outside of the test set; a sketch of this grouping strategy is given below. Additionally, to achieve fair comparisons with Mol-Instructions (Fang et al., 2023), for tasks shared between the two datasets (MC, MG, FS, and RS), we ensure that their training examples are not included in our test set, allowing for a direct evaluation of their models on our test set. Subject to these necessary constraints, samples are randomly split into training/validation/test sets, except for the PP task samples, which undergo scaffold splitting following the canonical method (Wu et al., 2018). Table 1 shows the statistics for each split.
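One way to realize the identical-input constraint is to group samples by their canonicalized input before assigning whole groups to splits; the sketch below follows that idea, and its field names (`"input"`, `"output"`) are illustrative, not taken from the released dataset.

```python
import random
from collections import defaultdict
from rdkit import Chem

def split_by_input(samples, test_ratio=0.1, seed=0):
    """Group samples by canonical input SMILES, then assign whole groups to
    train or test, so identical inputs never straddle the split boundary."""
    groups = defaultdict(list)
    for sample in samples:  # sample: {"input": smiles, "output": ...}
        key = Chem.MolToSmiles(Chem.MolFromSmiles(sample["input"]))
        groups[key].append(sample)

    keys = sorted(groups)
    random.Random(seed).shuffle(keys)
    n_test = int(len(keys) * test_ratio)
    test_keys = set(keys[:n_test])

    train = [s for k in keys[n_test:] for s in groups[k]]
    test = [s for k in test_keys for s in groups[k]]
    return train, test
```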
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2d4f88cf-af89-49de-9af3-5d0253f05c1e
**Instruction Creation.** To create query-response textual pairs for instruction tuning, we manually craft several templates, each including a query and a corresponding response, and apply GPT-4 to rephrase them. Unlike those in (Fang et al., 2023), which consist of highly formatted queries (containing three explicitly labeled parts, namely instruction, input, and output) and answer-only responses (e.g., responses for FS and RS contain the answer SMILES alone, without any natural text), our templates exhibit a more natural and diverse set of formats in both queries and responses, allowing for more variation and naturalness in input-output interactions. Moreover, all the SMILES representations are canonicalized, establishing a standardized data format. Since the dataset includes multiple types of sequences (SMILES, molecular formulas, numbers, etc.) beyond natural language text alone, we utilize special tags to encapsulate the corresponding segments (e.g., <SMILES>...</SMILES> for SMILES, <MOLFORMULA>...</MOLFORMULA> for molecular formulas, <NUMBER>...</NUMBER> for numbers). This design not only explicitly informs models about the information types within the tagged content, but also facilitates answer extraction during evaluation.
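With such tags, answer extraction reduces to a simple pattern match. A minimal sketch (the tag names are the ones listed above; the regex itself is an assumption, not the paper's code):

```python
import re

TAG_PATTERN = re.compile(r"<(SMILES|MOLFORMULA|NUMBER)>\s*(.*?)\s*</\1>", re.DOTALL)

def extract_tagged_answers(response: str):
    """Pull (tag, content) pairs such as ('SMILES', 'CCO') out of a model response."""
    return TAG_PATTERN.findall(response)

print(extract_tagged_answers(
    "A possible product can be <SMILES> O=CNC1=CC=C2OCOC2=C1 </SMILES>"
))  # [('SMILES', 'O=CNC1=CC=C2OCOC2=C1')]
```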
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c2c2af24-75e1-4f17-ae89-8752dbc2bbd7
## 4.3. Merits of SMolInstruct

Compared to previous work (Fang et al., 2023; Liang et al., 2023; Ye et al., 2023), SMolInstruct stands out in several key aspects: (1) **Large-scale**. SMolInstruct consists of 3.4M distinct samples and 1.6M distinct molecules with a diverse range of sizes, structures, and properties (see Figure 5 in Appendix C), showcasing extensive coverage of diverse chemical knowledge. (2) **Comprehensive**. SMolInstruct contains 4 types of chemical tasks (14 tasks in total), emerging as the most comprehensive instruction tuning dataset for small molecules. Notably, the tasks are meticulously selected to build a strong chemistry foundation. (3) **High-quality**. Rigorous processing steps have been implemented to exclude problematic and low-quality samples. Along with careful data splitting and canonicalization of SMILES representations, SMolInstruct stands as a high-quality resource valuable for future research.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d6358a59-1fe8-4228-8a3d-64d48997a867
## 5. Experiments

## 5.1. Our LlaSMol Models

By fine-tuning base models on the proposed SMolInstruct dataset, we create LLMs capable of performing chemistry tasks, which we name LlaSMol (large language models on small **mol**ecules). Specifically, we consider four different LLMs as our base models, namely Galactica 6.7B (Taylor et al., 2022), Llama 2 7B (Touvron et al., 2023b), Code Llama 7B (Roziere et al., 2023), and Mistral 7B (Jiang et al., 2023). Galactica is trained for scientific applications and has already been exposed to chemistry-related data during its pretraining; Llama 2 and Mistral are general-purpose LLMs, while Code Llama is based on Llama 2 and trained for code. We conduct instruction tuning on the proposed SMolInstruct dataset and name the resulting models LlaSMol-Galactica, LlaSMol-Llama 2, LlaSMol-Code Llama, and LlaSMol-Mistral, respectively. All the LlaSMol models are trained with LoRA (Hu et al., 2022), which is applied to all weight matrices in the self-attention and feed-forward network (FFN) modules, with `lora_r` and `lora_alpha` set to 16. The fine-tuning process utilizes the Huggingface Transformers library (Wolf et al., 2020). Training spans three epochs, employing the 8-bit AdamW optimizer, a learning rate of 1e-4, and a cosine scheduler. The input length for training is set to 512, which covers 99.7% of the samples. During inference, we adopt beam search as the generation strategy for simplicity. Details can be found in Appendix A.
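A minimal sketch of this LoRA setup using the Hugging Face PEFT library; the target module names assume the Llama/Mistral naming convention, and the checkpoint id is illustrative.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# LoRA on all attention and FFN projection matrices, with r = alpha = 16,
# as described for the LlaSMol models.
config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # roughly 40M trainable parameters (~0.59%)
```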
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d1f013a2-5d24-4c42-8543-af076ac67d3b
## 5.2. Experimental Setup

**Compared Models.** We compare our LlaSMol models with two types of models: (1) **LLMs without fine-tuning on SMolInstruct.** This type includes our four base models, namely Galactica (Taylor et al., 2022), Llama 2 (Touvron et al., 2023b), Code Llama (Roziere et al., 2023), and Mistral (Jiang et al., 2023). We also benchmark against GPT-4 (Achiam et al., 2023), the current state-of-the-art (SoTA) LLM. For Llama 2, Code Llama, and Mistral, we use a 1-shot setting due to their poor instruction-following ability; for GPT-4, we report its results under a zero-shot setting, as GPT-4 performs best in this setting in our experiments (Appendix B). We also include Molinst (Fang et al., 2023), a Llama 2 model fine-tuned on the Mol-Instructions dataset (Fang et al., 2023), which shares the MC, MG, FS, and RS tasks with SMolInstruct. (2) **SoTA task-specific models.** To provide a comprehensive view of LlaSMol's performance, we present results from SoTA task-specific models. For NC-I2S and NC-S2I, we compare with STOUT (Rajan et al., 2021), an encoder-decoder model trained on SMILES-IUPAC name paired data. For NC-S2F, a task achievable with a fixed algorithm, we implement a program with RDKit (rdk), a widely used Python toolkit for cheminformatics, and report its results. For NC-I2F, where no dedicated models exist, we construct a baseline called STOUT+RDKit by chaining STOUT for I2S conversion and RDKit for S2F conversion. For the PP tasks, our compared model is Uni-Mol (Zhou et al., 2023), which incorporates molecular 3D representations and follows a pretraining and fine-tuning paradigm. Following its original settings, we fine-tune the model on our SMolInstruct dataset from its pretrained checkpoint. For MC and MG, we compare with MolT5 (Edwards et al., 2022) and directly use their released checkpoint. The reasons why we do not use a re-trained model are: (1) we were unable to reproduce results close to those reported in the paper, as no original code was provided; and (2) we take great care to ensure that our test set is devoid of training examples used by MolT5, ensuring fairness in the evaluation. Lastly, for FS and RS, we re-train Molecular Transformer (Schwaller et al., 2019) and RSMILES (Zhong et al., 2022) for the two tasks, respectively, following their reported settings. Both models are transformer encoder-decoder models (Vaswani et al., 2017) specifically adapted for the FS and RS tasks.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
89a92f19-3f91-4e8b-8f23-43aa4033bd19
**Evaluation Metrics.** We employ evaluation metrics commonly used in previous work (Schwaller et al., 2019; Zhong et al., 2022; Fang et al., 2023; Zhou et al., 2023; Chen et al., 2023), which include: (1) **Exact Match (EM)**: the proportion of predicted results that exactly match the gold standards in the dataset. Each task may adopt a different definition of exact match, detailed in Appendix A. (2) **Fingerprint-based Tanimoto Similarity (FTS)**: quantifies structural similarities between molecules by calculating the Tanimoto similarity between their Morgan fingerprints (Morgan, 1965). (3) **METEOR score** (Lavie & Agarwal, 2007): for the MC task, we report this commonly used text-based metric, which offers a comprehensive evaluation by considering both exact matches and semantic similarity. (4) **Root Mean Square Error (RMSE)**: used for the PP-ESOL and PP-Lipo tasks, it quantifies the accuracy of predicted values by measuring the square root of the average squared differences between predicted and actual values. (5) **Accuracy (Acc)**: for the binary classification tasks (PP-BBBP, PP-ClinTox, PP-HIV, and PP-SIDER), the ratio of correct predictions. (6) **Validity (Valid)**: for tasks whose outputs are SMILES representations (NC-I2S, MG, FS, and RS), the ratio of predictions that follow the SMILES grammar and satisfy chemical valence rules. For all the above metrics except RMSE, higher is better. For more details regarding training, inference, and evaluation setups, please refer to Appendix A.
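For the molecule-output metrics, a hedged sketch of how EM and FTS are typically computed with RDKit (canonical-SMILES comparison for EM, Morgan fingerprints for FTS); the exact per-task EM definitions live in Appendix A, so this is illustrative only.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def canonical(smiles: str):
    mol = Chem.MolFromSmiles(smiles)
    return None if mol is None else Chem.MolToSmiles(mol)

def exact_match(pred: str, gold: str) -> bool:
    """EM: canonical SMILES of prediction and gold standard are identical."""
    return canonical(pred) is not None and canonical(pred) == canonical(gold)

def tanimoto_fts(pred: str, gold: str) -> float:
    """FTS: Tanimoto similarity between Morgan fingerprints (radius 2)."""
    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
           for s in (pred, gold)]
    return DataStructs.TanimotoSimilarity(fps[0], fps[1])

print(exact_match("OCC", "CCO"))            # True: same molecule, different SMILES
print(round(tanimoto_fts("CCO", "CCN"), 3)) # structural similarity in [0, 1]
```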
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
70a7005f-0f05-4df1-a34a-e14429560fd9
## 5.3. Main Results

Tables 2 and 3 show the performance of different models on all tasks in SMolInstruct.

Table 2. Results on the name conversion (NC) tasks.

| Model | NC-I2F (EM) | NC-I2S (EM) | NC-I2S (Valid) | NC-S2F (EM) |
|---|---|---|---|---|
| **Task-specific, non-LLM based models** | | | | |
| SoTA | 97.9 | 73.5 | 99.4 | 100 |
| **Existing LLMs without fine-tuning on SMolInstruct** | | | | |
| GPT-4 | 21.8 | 3.6 | 84.2 | 16 |
| Galactica | 24.7 | 9.7 | 95.6 | 8 |
| Llama 2 | 0.9 | 0.0 | 18.1 | 0 |
| Code Llama | 0.4 | 0.0 | 81.0 | 0 |
| Mistral | 0.2 | 0.0 | 40.3 | 0 |
| Molinst (instruction-tuned) | 0.0 | 0.0 | 96.2 | 0 |
| **Our LlaSMol series** | | | | |
| LlaSMol-Galactica | 81.0 | 57.7 | 99.6 | 90 |
| LlaSMol-Llama 2 | 74.6 | 41.8 | 99.1 | 86 |
| LlaSMol-Code Llama | 79.2 | 49.9 | 99.3 | 91 |
| LlaSMol-Mistral | 89.8 | 70.1 | 99.6 | 94 |

We make the following key observations:

(1) **Among all the LLMs, our LlaSMol models demonstrate the best performance on all tasks**, underscoring the effectiveness of the proposed SMolInstruct dataset and the benefits of fine-tuning. When compared to our base models (Galactica, Llama 2, Code Llama, and Mistral), the LlaSMol models exhibit remarkable performance enhancements after fine-tuning on SMolInstruct. This highlights SMolInstruct's effectiveness in enhancing models' understanding of molecular representations and task-related knowledge, and signifies the effective learning of chemistry-related tasks by LLMs. LlaSMol also outperforms GPT-4 on all the tasks, despite GPT-4's larger parameter size. Notably, LlaSMol-Llama 2, which has the same base model and LoRA setting as Molinst (Fang et al., 2023), surpasses it even on the shared training tasks (MC, MG, FS, and RS). This shows the benefits of our expansive and high-quality dataset compared to Mol-Instructions.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
74eb6525-c4d7-45fb-9759-1ad7db22e38d
(2) **Our four LlaSMol models show substantial differences in their performance, emphasizing the significant impact of base models on downstream tasks.** Despite sharing identical training and inference settings, as well as comparable model sizes, LlaSMol-Mistral consistently outperforms LlaSMol-Llama 2 by a substantial margin, highlighting Mistral's great potential for chemistry tasks. In addition, LlaSMol-Code Llama exhibits better performance than LlaSMol-Llama 2, indicating a potential synergy between the programming language knowledge in Code Llama and molecular representations. For instance, understanding code grammar may positively impact the learning of SMILES representations, which can be regarded as a well-defined coding language for molecules. Furthermore, LlaSMol-Galactica outperforms LlaSMol-Llama 2 and LlaSMol-Code Llama in most cases, suggesting the benefits of pretraining on chemistry-related documents.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
cf54ba1a-2185-4556-9a41-c98b03be7dec
(3) **Our LlaSMol models exhibit performance comparable to SoTA task-specific models, showing great potential as chemistry foundation models.** Even though the LlaSMol models may not outperform these SoTA models on many tasks, the observed performance gap has markedly diminished compared to previous efforts (Fang et al., 2023). Notably, with only a small proportion of parameters tuned (approximately 40M, 0.59%) in the 7B models, LlaSMol-Mistral already achieves performance approaching or even surpassing the SoTA on several tasks, such as NC-I2S, PP-SIDER, and MC. As the exploration in Section 5.4 will show, adding trainable parameters can lead to a substantial performance boost, so we anticipate that LLMs possess immense potential to surpass task-specific models through more extensive fine-tuning. Additionally, LlaSMol underscores the potential of a universal model capable of addressing multiple chemistry tasks. For detailed experimental results on more metrics, please refer to Appendix B.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1af97958-96b0-4cb9-8594-4f6b4d95f858
Table 3. Results on the MC, MG, FS, and RS tasks.

| Model | MC METEOR | MG EM | MG FTS | MG Valid | FS EM | FS FTS | FS Valid | RS EM | RS FTS | RS Valid |
|---|---|---|---|---|---|---|---|---|---|---|
| **Task-specific, non-LLM based models** | | | | | | | | | | |
| SoTA | 0.515 | 31.6 | 73.2 | 95.3 | 78.7 | 92.2 | 100.0 | 47.0 | 77.5 | 99.7 |
| **Existing LLMs without fine-tuning on SMolInstruct** | | | | | | | | | | |
| GPT-4 | 0.198 | 3.4 | 43.1 | 43.1 | 1.1 | 39.0 | 92.0 | 1.2 | 43.7 | 81.0 |
| Galactica | 0.050 | 0.0 | 11.6 | 94.7 | 0.0 | 25.8 | 91.3 | 0.0 | 34.6 | 93.0 |
| Llama 2 | 0.150 | 0.0 | 4.8 | 93.6 | 0.0 | 13.7 | 97.7 | 0.0 | 27.5 | 87.7 |
| Code Llama | 0.143 | 0.0 | 8.5 | 95.2 | 0.0 | 15.8 | 99.6 | 0.0 | 25.3 | 97.1 |
| Mistral | 0.193 | 0.0 | 9.0 | 35.9 | 0.0 | 19.8 | 99.4 | 0.0 | 24.2 | 98.0 |
| Molinst (instruction-tuned) | 0.124 | 6.0 | 43.6 | 84.8 | 2.1 | 31.6 | 99.8 | 5.7 | 48.0 | 97.8 |
| **Our LlaSMol series** | | | | | | | | | | |
| LlaSMol-Galactica | 0.394 | 7.8 | 51.0 | 99.8 | 52.7 | 79.7 | 99.8 | 25.3 | 67.0 | 99.8 |
| LlaSMol-Llama 2 | 0.366 | 4.8 | 44.2 | 99.9 | 44.2 | 75.3 | 99.7 | 22.4 | 65.2 | 99.9 |
| LlaSMol-Code Llama | 0.366 | 6.5 | 46.6 | 99.8 | 52.3 | 79.4 | 99.8 | 25.7 | 66.7 | 100.0 |
| LlaSMol-Mistral | 0.452 | 19.2 | 61.7 | 99.8 | 63.5 | 85.0 | 99.8 | 32.9 | 70.4 | 100.0 |

## 5.4. Influence of LoRA Modules and Trainable Parameters

In this section, we investigate the influence of using different LoRA modules and different numbers of trainable parameters. We take LlaSMol-Llama 2 as the basic setting and refer to it as LlaSMol in this section for simplicity. All the compared models are listed as follows, with the trainable parameter sizes and ratios given in brackets:

- **LlaSMol Lite** (8.4M, 0.12%): LoRA is applied to `q_proj` and `v_proj` of the attention modules.
- **LlaSMol Attn** (16.8M, 0.25%): LoRA is applied to all the attention projection matrices (`q_proj`, `k_proj`, `v_proj`, and `o_proj`).
- **LlaSMol FFN** (23.2M, 0.34%): LoRA is applied to all the FFN projection matrices (`gate_proj`, `down_proj`, and `up_proj`).
- **LlaSMol** (40.0M, 0.59%): The basic setting. LoRA is applied to all the attention and FFN projection matrices.
- **LlaSMol Plus** (171.0M, 2.48%): LoRA is applied to all the attention and FFN projection matrices, and `lm_head` is also set trainable.
- **LlaSMol Large** (62.6M, 0.48%): Unlike the other models, which are based on Llama 2 7B, this model takes Llama 2 13B as the base model. LoRA is applied to all the attention and FFN projection matrices, the same as the basic setting.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f7e762a5-621b-4c5a-93c7-51b90a1df8b9
All these models are trained with identical training configurations (as described in Section 5.1); a sketch of the corresponding LoRA configurations follows below. We show the model performance on the forward synthesis (FS) task in Figure 2; results for the other tasks can be found in Appendix B. The key observations are summarized as follows: (1) Progressing from LlaSMol Lite through LlaSMol Attn and LlaSMol FFN to LlaSMol, incorporating more LoRA modules during training leads to a significant performance enhancement. (2) Comparing LlaSMol and LlaSMol Large shows that larger base models yield superior results. Furthermore, comparing LlaSMol Large and LlaSMol Plus, despite a considerably smaller trainable parameter size, the former still outperforms the latter. This indicates that performance is not solely determined by the trainable parameter size; rather, the inherent capability of the base model also plays a crucial role. In summary, refining the selection of LoRA modules to train and employing a larger, more sophisticated base model offer significant potential for further improving the performance of LLMs on chemistry tasks.
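A hedged sketch of how the variants above differ purely in their PEFT configuration; the module names again assume the Llama naming convention, and `modules_to_save` mirrors the fully trainable `lm_head` of LlaSMol Plus.

```python
from peft import LoraConfig

ATTN = ["q_proj", "k_proj", "v_proj", "o_proj"]
FFN = ["gate_proj", "down_proj", "up_proj"]

VARIANTS = {
    "lite": LoraConfig(r=16, lora_alpha=16, target_modules=["q_proj", "v_proj"]),
    "attn": LoraConfig(r=16, lora_alpha=16, target_modules=ATTN),
    "ffn":  LoraConfig(r=16, lora_alpha=16, target_modules=FFN),
    "base": LoraConfig(r=16, lora_alpha=16, target_modules=ATTN + FFN),
    # LlaSMol Plus additionally trains (and saves) the LM head in full.
    "plus": LoraConfig(r=16, lora_alpha=16, target_modules=ATTN + FFN,
                       modules_to_save=["lm_head"]),
}
```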
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f9002a5c-663e-4d91-afda-a101308fe30f
## 6. Conclusion

To improve the performance of LLMs on chemistry tasks, this paper introduces a large-scale, comprehensive, high-quality instruction tuning benchmark, SMolInstruct. It comprises 14 meticulously chosen tasks that are highly relevant to real-world applications and over 3M carefully curated samples that undergo thorough processing to ensure exceptional quality. Based on SMolInstruct, we developed a series of LLMs named LlaSMol on four different base models, among which the Mistral-based LlaSMolMistral achieves the best performance. Our comprehensive experimental results demonstrate LlaSMol's superiority over existing LLMs, including GPT-4, on all the tasks, as well as substantial potential for LLMs to surpass the SoTA task-specific models. We will release our code and data to facilitate future research and development.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3581efe2-a9db-4913-88f4-e97d933e8c4b
## Impact Statements

We anticipate that our proposed SMolInstruct dataset and models will contribute to future research on LLMs for chemistry. However, we acknowledge certain limitations of this work. First, despite our best efforts to ensure the high quality of the SMolInstruct dataset, we cannot guarantee the absence of incorrect or harmful information. Second, our primary focus is on advancing LLMs for chemistry tasks; hence, we have not evaluated the models' generalization abilities beyond those tasks or their safety risks (such as generating harmful content under adversarial attacks). While recognizing the importance of these issues, we leave them for future work.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
27d900a9-19f3-4cb1-b1a2-52fefdee544d
## A. Details Of Experimental Setup

In this section, we introduce the details of our experimental setup, including the training and inference details of our LlaSMol models and the compared models. We also give detailed explanations of the metrics used in Section 5.3, as well as the extra metrics used in Appendix B.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ad4ae3dc-cf92-4fd4-b64f-95e2638a7ff6
## A.1. LlaSMol Models

The base models used for developing LlaSMol are Galactica (Taylor et al., 2022; facebook/galactica-6.7b), Llama 2 (Touvron et al., 2023b; meta-llama/Llama-2-7b-hf), Code Llama (Roziere et al., 2023; codellama/CodeLlama-7b-hf), and Mistral (Jiang et al., 2023; mistralai/Mistral-7B-v0.1). We conduct instruction tuning on our SMolInstruct, and the resulting models are named LlaSMolGalactica, LlaSMolLlama 2, LlaSMolCode Llama, and LlaSMolMistral, respectively. Except for the base models, their training and evaluation configurations are identical, as described below. We use LoRA (Hu et al., 2022) during training, applied to all linear layers in the self-attention and FFN modules, with lora_r and lora_alpha set to 16. With the 8-bit AdamW optimizer, a learning rate of 1e-4, and a cosine scheduler, we train each model for three epochs. The input length is set to 512, and longer sequences are truncated. During inference, we adopt beam search as the generation strategy for simplicity. Because we evaluate the top-k predicted answers (as in Appendix B), where k varies across tasks, we generate different numbers of sequences for different tasks by setting the num_return_sequences argument in the HuggingFace Transformers library (Wolf et al., 2020). Specifically, it is set to 5 for NC-I2S, NC-S2I, FS, and MG; 3 for NC-I2F and NC-S2F; 1 for all the PP tasks; and 10 for RS. The beam size is set to num_return_sequences + 3 for all the tasks. The maximum number of newly generated tokens is set to 1024.
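As a rough illustration of this setup (not the authors' released code; the prompt is a placeholder), the training and decoding pieces fit together as follows:

```python
# A minimal sketch, assuming HuggingFace transformers + peft, of the
# LlaSMol fine-tuning configuration and beam-search decoding described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # or the Galactica / Llama 2 / Code Llama bases
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# LoRA on all attention and FFN projection matrices, r = alpha = 16.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=16, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
))

# Inference: return k sequences (k = 5 for FS) with beam size k + 3
# and up to 1024 new tokens.
prompt = "<SMILES> NC1=CC=C2OCOC2=C1.O=CO </SMILES> Suggest a possible product."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs, num_beams=8, num_return_sequences=5, max_new_tokens=1024,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```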
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
494305c2-4911-479a-90c8-3d1bdffe4ad0
## A.2. Compared Models

We introduce each of the compared models in detail, including their training (if applicable) and inference process.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6d07b906-eac2-4052-ae1d-c545fe9c8280
## A.2.1. GPT-4

GPT-4 (Achiam et al., 2023) is the SoTA LLM to date. We use the model versioned as gpt-4-0613 and evaluate it on 500 samples from the test set via OpenAI's API. Since GPT-4 is not fine-tuned on our dataset and thus is not familiar with the flexible queries, to ensure that it generates answers in the expected format, we follow the prompt format proposed in (Guo et al., 2023) and create a query template for each of the tasks. The template for FS, shown in Figure 3, contains four parts: (1) **General template** describes the task in a general way. (2) **Task-specific template** describes the detailed content and format requirements for the specific task. (3) **ICL** contains the in-context learning examples, provided in the format `<input title>: <input content>\n <output title>: <output content>\n`, where `<input title>` and `<output title>` serve as straightforward prompts for the input and output content. This design makes the queried task clearer. (4) **Question** has the same format as ICL, with `<output content>` left empty for the model to generate.

The FS template reads as follows:

> **General template:** You are an expert chemist. Given the SMILES representation of reactants and reagents, your task is to predict the potential product using your chemical reaction knowledge.
> **Task-specific template:** The input contains both reactants and reagents, and different reactants and reagents are separated by ".". Your reply should contain only the SMILES representation of the predicted product and no other text. Your reply must be valid and chemically reasonable.
> **ICL:** Reactants and reagents SMILES: C1CCOC1.CCN(CC)CC.CS(=O)(=O)Cl.CS(C)=O.N[C@@H]1CC2=CC=C(CN3C=C(CO)C(C(F)(F)F)=N3)C=C2C1
> Product SMILES: CS(=O)(=O)N[C@@H]1CC2=CC=C(CN3C=C(CO)C(C(F)(F)F)=N3)C=C2C1
> **Question:** Reactants and reagents SMILES: CCN.CN1C=CC=C1C=O
> Product SMILES:
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b1fe4107-208f-4193-8223-2d6f84c65b21
We conduct s-shot evaluations, where s = 0, 1, 3, 5 is the number of provided ICL examples. For the 0-shot evaluation, the ICL part is removed from the queries. For s-shot evaluations with s > 0, the ICL examples for each sample are randomly selected from the training set. The results of these settings are shown in Appendix B, which reveals that their performance is not consistent across all the tasks. Since 0-shot shows the best performance on most tasks, we report its results in Section 5.3. In the evaluations, we use the default generation strategy set in the API. To generate the same number of results for each sample (as described in Appendix A.1), we set the argument n in the API, which controls the number of output sequences. GPT-4 consistently follows the formatted instructions introduced above, so we do not need to extract the answers from its outputs but directly use them as the predicted answers.
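A hedged sketch of this query flow, assuming the current OpenAI Python client (the prompt below abbreviates the Figure 3 template):

```python
# A sketch of the 0-shot GPT-4 query flow; the prompt text is a shortened
# stand-in for the full Figure 3 template.
from openai import OpenAI

client = OpenAI()
prompt = (
    "You are an expert chemist. Given the SMILES representation of reactants "
    "and reagents, your task is to predict the potential product...\n"
    "Reactants and reagents SMILES: CCN.CN1C=CC=C1C=O\n"
    "Product SMILES:"
)
response = client.chat.completions.create(
    model="gpt-4-0613",
    messages=[{"role": "user", "content": prompt}],
    n=5,  # number of output sequences, analogous to num_return_sequences
)
answers = [choice.message.content for choice in response.choices]
```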
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8313c951-d528-4a9e-bd14-2b600d8ad678
## A.2.2. Galactica

Galactica (Taylor et al., 2022) is an LLM that has not undergone instruction tuning. To evaluate it on SMolInstruct, we follow the instructions in the paper (Taylor et al., 2022) and the official repository to create the queries for each task. We use the zero-shot setting, as the official instructions do not suggest few-shot prompting. The generation configuration is identical to that of our LlaSMol models (Appendix A.1). Galactica's outputs may contain extra text besides the expected answers. Therefore, using heuristic rules and regular expression matching, we implement a program to extract the answers from the model's outputs.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
03e88d12-b1fe-4539-979e-89f8acf6650b
## A.2.3. Llama 2, Code Llama, And Mistral

For our base models (Llama 2, Code Llama, and Mistral), since they are not trained on SMolInstruct and have not seen the diverse queries in the dataset, we use the same query templates as those used for GPT-4 (Appendix A.2.1). We use the one-shot setting for them, as it improves the models' ability to follow the instructions and generate answers in a more structured format. In addition, the generation configuration (including beam size, number of output sequences, etc.) is identical to that of our LlaSMol models (Appendix A.1). Although we try our best to make the output format as clear as possible in the queries, these three models still cannot reliably follow the instructions, and their outputs come in various formats. Using heuristic rules and regular expression matching, we implement a program to extract the answers from the outputs of each model, as sketched below.
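The extraction program itself is not shown in the paper; the sketch below is our own illustrative heuristic, pairing a permissive regular expression with RDKit parsing to pick out the first chemically valid SMILES token:

```python
# A rough sketch (our own heuristic, not the authors' exact program) of
# extracting a SMILES answer from free-form model output.
import re
from typing import Optional

from rdkit import Chem, RDLogger

RDLogger.DisableLog("rdApp.error")  # silence parse errors for bad candidates

SMILES_RE = re.compile(r"(?:<SMILES>\s*)?([A-Za-z0-9@+\-\[\]\(\)=#$/\\.%]+)")

def extract_smiles(output: str) -> Optional[str]:
    """Return the first token that RDKit can parse as a molecule."""
    for match in SMILES_RE.finditer(output):
        candidate = match.group(1)
        if Chem.MolFromSmiles(candidate) is not None:
            return candidate
    return None

print(extract_smiles("The product is likely O=CNC1=CC=C2OCOC2=C1 ."))
```

In practice such a heuristic can be fooled by short English words that happen to parse as molecules, which is one reason answer extraction from untuned models is lossy.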
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
abf1cb96-50c7-482c-927e-ad4bbeefad75
## A.2.4. Molinst

Molinst is a Llama 2 model fine-tuned on Mol-Instructions (Fang et al., 2023). On the tasks shared between Mol-Instructions and SMolInstruct (MC, MG, FS, and RS), we directly use the query templates from Mol-Instructions to achieve better results. For the other tasks, we create one query template per task following the style of Mol-Instructions. We use the zero-shot setting for its evaluation, as Mol-Instructions does not contain any one-shot use case. The outputs of Molinst may also contain extra text besides the expected answers, especially on its unseen tasks, so we likewise implement a program to extract the answers.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
19082966-bad6-4e2b-8df6-78b0dfb7c6da
## A.3. Task-Specific, Non-LLM Based SoTA Models

## A.3.1. STOUT For NC-I2S And NC-S2I

STOUT is an encoder-decoder model trained on paired SMILES and IUPAC name data, and it is capable of conducting the NC-I2S and NC-S2I tasks. Due to the lack of training code, we cannot re-train it on our dataset and directly use the released model checkpoint (https://github.com/Kohulan/Smiles-TO-iUpac-Translator). Since it may have encountered some test samples of SMolInstruct during training, the evaluation results in Table 2 may be higher than its real performance.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
20bbc482-44a3-4cc2-b4f0-bd04e3bbae61
## A.3.2. RDKit For NC-S2F

The NC-S2F task can be solved exactly by a fixed algorithm that parses the input SMILES representation and counts the atoms. We implement such a program with RDKit (a widely used Python toolkit for processing molecules and other chemical information) and report its results.
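A minimal sketch of this baseline follows; RDKit's CalcMolFormula performs the atom counting (including implicit hydrogens) internally:

```python
# A minimal sketch of the RDKit-based NC-S2F baseline: parse a SMILES
# string and derive its molecular formula.
from rdkit import Chem
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

def smiles_to_formula(smiles: str) -> str:
    mol = Chem.MolFromSmiles(smiles)  # returns None for invalid SMILES
    if mol is None:
        raise ValueError(f"Invalid SMILES: {smiles}")
    return CalcMolFormula(mol)

print(smiles_to_formula("CCO"))  # C2H6O (ethanol)
```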
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fc98a248-73c5-4123-b8a2-7823f75d2ff9
## A.3.3. STOUT+RDKit For NC-I2F

Since there are no dedicated models for the NC-I2F task, we combine STOUT for the IUPAC-to-SMILES conversion and RDKit for the SMILES-to-molecular-formula conversion. Specifically, we feed the input IUPAC name into STOUT to get the corresponding SMILES, and then use the RDKit-based program to get the molecular formula from that SMILES.
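A sketch of this two-stage pipeline, assuming the translate_reverse entry point of the STOUT-pypi package (the exact API of the released checkpoint may differ), reusing the formula helper from A.3.2:

```python
# A sketch of the two-stage NC-I2F baseline: STOUT (IUPAC -> SMILES),
# then RDKit (SMILES -> molecular formula).
from rdkit import Chem
from rdkit.Chem.rdMolDescriptors import CalcMolFormula
from STOUT import translate_reverse  # assumed STOUT-pypi API

def iupac_to_formula(iupac_name: str) -> str:
    smiles = translate_reverse(iupac_name)   # stage 1: name -> SMILES
    mol = Chem.MolFromSmiles(smiles)         # stage 2: parse SMILES
    if mol is None:
        raise ValueError(f"STOUT returned unparsable SMILES: {smiles}")
    return CalcMolFormula(mol)               # stage 2: SMILES -> formula

print(iupac_to_formula("ethanol"))  # expected: C2H6O
```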
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ad401bcf-5a61-4d8c-b38e-3be81b5fe530
## A.3.4. Uni-Mol For All The PP Tasks

Uni-Mol (Zhou et al., 2023) is a framework for learning useful representations of molecules based on their 3D conformations, and it can be fine-tuned to perform property prediction on top of these representations. Using the pretrained model weights, hyperparameters, and code supplied by the authors, we fine-tune Uni-Mol models for the chemical property prediction tasks on our dataset. For SIDER property prediction, we set the number of targets for multi-target classification to 20, as our dataset focuses on a specific subset of 20 SIDER targets. We generate results from Uni-Mol using the code provided by the authors and evaluate them according to the metrics in Section 5.2. The data split used for fine-tuning, validation, and testing on the property prediction tasks differs from the one used in the Uni-Mol paper, so the performance may not match exactly.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b02a83c0-0910-494c-a1ba-967c86ec00d3
## A.3.5. MolT5 For MC And MG

MolT5 (Edwards et al., 2022) is a T5 model for translating between molecules and natural language. We use the already fine-tuned MolT5-large checkpoints provided by the authors for both molecule description and molecule generation. We generate predictions on our test set using beam search with 5 beams, following the example code provided by the authors. For input, the molecule description model is given a SMILES string and the molecule generation model is given a natural language description. For molecule description, we generate only one result; for molecule generation, we set the number of returned sequences to 5. We evaluate the test set results according to the metrics in Section 5.2. The data used for testing differs from that used in the MolT5 paper, so the performance may differ as well. Note that our test set does not overlap with the MolT5 training set.
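For illustration, MG inference with MolT5 could look as follows, assuming the authors' caption-to-SMILES checkpoint name on the HuggingFace Hub:

```python
# A sketch of MolT5-large inference for molecule generation (description
# -> SMILES), with beam search returning 5 sequences as described above.
from transformers import T5ForConditionalGeneration, T5Tokenizer

ckpt = "laituan245/molt5-large-caption2smiles"  # assumed checkpoint name
tokenizer = T5Tokenizer.from_pretrained(ckpt)
model = T5ForConditionalGeneration.from_pretrained(ckpt)

description = "The molecule is a red-coloured tripyrrole pigment ..."
inputs = tokenizer(description, return_tensors="pt")
outputs = model.generate(
    **inputs, num_beams=5, num_return_sequences=5, max_new_tokens=256,
)
for seq in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(seq)
```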
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d9b1304d-b14b-4966-86c6-72d0ca3c5b20
## A.3.6. RSMILES For FS And RS

RSMILES (Zhong et al., 2022) is a transformer model trained on pairs of SMILES strings aligned to minimize their edit distance. RSMILES translates aligned SMILES strings of reactants and reagents into products for the FS task, and products into reactants for the RS task. Following the settings described in the RSMILES paper, we augment and align each pair of SMILES strings in our training data 5 times. For the FS task, we adopt the "mixed" setting and append canonical SMILES strings of reagents to the end of the aligned reactant SMILES strings. We train two RSMILES models for the FS and RS tasks, respectively, using the hyperparameters provided in their GitHub repository. After training, we average the last 5 checkpoints to obtain the final checkpoint for each task. During inference, we augment each input SMILES string 5 times and generate 10 output SMILES strings per augmented input using beam search, resulting in a total of 50 SMILES strings for each test reaction. We obtain the final top-10 predictions for each task by aggregating these 50 predictions using their provided scripts. The performance of our re-trained RSMILES model on our dataset for the RS task is comparable with the results reported in their paper on the USPTO-full dataset. Note that the performance of our re-trained RSMILES for the FS task, as shown in Table 3, is lower than the results reported on the USPTO-MIT dataset in their paper; this is because our dataset for the FS task is more challenging than USPTO-MIT, owing to the inclusion of stereochemical information.
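The augmentation step can be illustrated with RDKit's randomized SMILES writer; note that this sketch omits the root alignment that RSMILES additionally performs:

```python
# A sketch of 5x SMILES augmentation via randomized atom ordering.
from rdkit import Chem

def augment_smiles(smiles: str, n: int = 5) -> list[str]:
    mol = Chem.MolFromSmiles(smiles)
    # doRandom=True emits a random (non-canonical) traversal on each call.
    return [
        Chem.MolToSmiles(mol, doRandom=True, canonical=False)
        for _ in range(n)
    ]

print(augment_smiles("CC1=CC=C(N)N=C1N"))
```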
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8a84f10f-76a8-40e1-b7b4-3b15f2f9f179
## A.3.7. Molecular Transformer For FS And RS

Similar to RSMILES, Molecular Transformer (Schwaller et al., 2019) is a transformer model trained on pairs of SMILES strings, translating reactants and reagents into products, or products into reactants. While the original Molecular Transformer focused only on the FS task, we train and test it on both the FS and RS tasks. We use canonical SMILES strings of molecules without data augmentation as the training data. We train two Molecular Transformer models separately for the FS and RS tasks using the hyperparameters provided in their GitHub repository. During inference, we generate 10 output SMILES strings for each canonical input SMILES string using beam search. The performance of our re-trained Molecular Transformer model on our dataset for the FS task is comparable with the results reported in their paper on the USPTO-STEREO dataset.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
bd77dbb6-e8f9-4a3d-b385-10d852d59e84
## A.4. Evaluation Metrics

We introduce the metrics used in Section 5.3 as follows:

- **Exact match (EM)**. It measures whether a model's responses perfectly match the reference (ground truth) answers. For each predicted result, we compare it with the gold answers of all the samples that have the same input; if there is a match, it is counted as correct and contributes to this metric. Different output types use different matching criteria. For tasks whose outputs are SMILES strings (NC-I2S, MG, FS, and RS), we parse the SMILES strings into molecules, and two outputs match only if the two molecules are identical. For tasks whose outputs are molecular formulas, two formulas match if they represent the same set of atoms with identical atom counts. For tasks whose outputs are IUPAC names (NC-S2I), since an IUPAC name may contain multiple parts separated by semicolons, we compare the sets composed of these parts; that is, we ignore the order and multiplicity of the parts and judge by the correctness of the unique parts.
- **Fingerprint-based Tanimoto Similarity (FTS)**. A family of metrics commonly used in cheminformatics to measure the structural similarity between molecules. The one we report in Section 5.3, called Morgan FTS, leverages the Morgan method to calculate the fingerprint (Morgan, 1965) (see the sketch after this list).
- **METEOR score**. A common metric for measuring the similarity between texts (Lavie & Agarwal, 2007).
- **RMSE**. A common metric measuring the distance between predicted and gold values on regression tasks; smaller is better.
- **Acc**. The ratio of correct predictions.
- **Validity (Valid)**. The ratio of predicted SMILES representations that can be successfully parsed into a molecule. It is calculated over all the generated outputs that contain an extractable answer part; if an output refuses to answer a question, it is not counted toward validity. Except for this metric, all the other SMILES-based metrics are calculated on the valid samples.
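A minimal sketch of the SMILES-based EM and Morgan FTS computations with RDKit:

```python
# EM via canonical-SMILES equality, and Morgan FTS via Tanimoto similarity
# of Morgan fingerprints (radius 2, 2048 bits).
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def exact_match(pred: str, gold: str) -> bool:
    m1, m2 = Chem.MolFromSmiles(pred), Chem.MolFromSmiles(gold)
    if m1 is None or m2 is None:
        return False
    # Canonical SMILES strings are equal iff the molecules are identical.
    return Chem.MolToSmiles(m1) == Chem.MolToSmiles(m2)

def morgan_fts(pred: str, gold: str) -> float:
    m1, m2 = Chem.MolFromSmiles(pred), Chem.MolFromSmiles(gold)
    fp1 = AllChem.GetMorganFingerprintAsBitVect(m1, 2, nBits=2048)
    fp2 = AllChem.GetMorganFingerprintAsBitVect(m2, 2, nBits=2048)
    return DataStructs.TanimotoSimilarity(fp1, fp2)

print(exact_match("OCC", "CCO"))   # True: same molecule, different SMILES
print(morgan_fts("CCO", "CCN"))    # similarity in [0, 1]
```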
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
bdfa00c5-da02-4aad-be13-9038275d10be
Additional metrics used in Appendix B are briefly introduced as follows:

- **Top-k Exact Match**: the same as EM discussed above, but computed over the top-k generated outputs, giving a more comprehensive view.
- **MACCS FTS and RDK FTS**: in addition to the Morgan FTS used in the previous sections, two extra FTS metrics that use the MACCS (Durant et al., 2002) and RDK (Schneider et al., 2015) methods, respectively, to calculate the fingerprint.
- **BLEU scores and ROUGE scores**: other text-based metrics that measure the similarity between texts.
- **Matthews Correlation Coefficient (MCC)**: applied to the binary classification tasks (PP-BBBP, PP-ClinTox, PP-HIV, and PP-SIDER), this metric provides a balanced measure of the quality of binary classifications (Matthews, 1975).
- **F1 Score**: the harmonic mean of precision and recall; a commonly used metric for classification tasks (a toy computation follows this list).
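These classification metrics are standard; for reference, a toy computation with scikit-learn (the labels are illustrative, not from our evaluation):

```python
# Acc, F1, and MCC on toy binary labels standing in for PP predictions.
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print("Acc:", accuracy_score(y_true, y_pred))
print("F1 :", f1_score(y_true, y_pred))
print("MCC:", matthews_corrcoef(y_true, y_pred))
```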
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
24af7beb-e059-47a8-b74f-9429690d5805
## B. Overall Experimental Results

## B.1. Main Results

In this section, we present the overall experimental results with additional metrics. Results for the name conversion tasks are presented in Tables 4, 5, 6, and 7; results for the molecule description tasks in Tables 8 and 9; results for the chemical reaction tasks in Tables 10 and 11; and results for the property prediction tasks in Tables 12 and 13.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8a8364c3-34e7-48e0-8b32-3c4d89107ffa
## B.1.1. Name Conversion

For the NC-I2F task (Table 4), LlaSMolMistral is the best-performing LLM, tying with many other methods on validity. All LlaSMol models outperform all other methods except the SoTA task-specific method, which showcases the benefit of fine-tuning on SMolInstruct. The SoTA task-specific method (STOUT+RDKit) outperforms LlaSMolMistral, achieving an exact match 97.9% of the time. The best-performing LLM baseline for top-1 exact match is Galactica, and GPT-4 (1-shot) is the best-performing LLM baseline for top-3; neither comes close to the performance of the LlaSMol series, except on validity.

Table 4 (NC-I2F):

| Model | EM Top-1 | EM Top-3 | Validity |
|---|---|---|---|
| STOUT+RDKit | 97.9 | - | 100.0 |
| GPT-4 | 16.0 | 33.0 | 100.0 |
| GPT-4 (zero-shot) | 16.0 | 35.0 | 100.0 |
| GPT-4 (0-shot) | 21.8 | 35.6 | 100.0 |
| GPT-4 (1-shot) | 22.2 | 36.2 | 100.0 |
| GPT-4 (3-shot) | 20.6 | 34.6 | 100.0 |
| GPT-4 (5-shot) | 21.2 | 35.0 | 100.0 |
| Galactica | 24.7 | 28.1 | 100.0 |
| Llama 2 | 0.9 | 1.4 | 99.8 |
| Code Llama | 0.4 | 1.1 | 99.4 |
| Mistral | 0.2 | 3.1 | 99.9 |
| Molinst (instruction-tuned) | 0.0 | 0.1 | 66.8 |
| LlaSMolGalactica | 81.0 | 91.2 | 100.0 |
| LlaSMolLlama 2 | 74.6 | 86.6 | 100.0 |
| LlaSMolCode Llama | 79.2 | 89.1 | 100.0 |
| LlaSMolMistral | 89.8 | 94.6 | 100.0 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
297ac3aa-d277-495f-9813-dff2c7c0a1a2
For the NC-I2S task (Table 5), LlaSMolMistral is the best-performing LLM on all metrics, tying with LlaSMolGalactica on validity. All LlaSMol models outperform all other methods except the SoTA task-specific method, which again showcases the benefit of fine-tuning on SMolInstruct. The SoTA task-specific method (STOUT) only barely outperforms LlaSMolMistral, achieving an exact match 73.5% of the time and even better FTS and validity scores. The best-performing LLM baseline is Galactica, but its performance is not remotely close to LlaSMol on any metric except validity.

Table 5 (NC-I2S):

| Model | EM Top-1 | EM Top-3 | EM Top-5 | MACCS FTS | RDK FTS | Morgan FTS | Validity |
|---|---|---|---|---|---|---|---|
| STOUT | 73.5 | - | - | 99.9 | 99.8 | 99.5 | 99.4 |
| GPT-4 (0-shot) | 3.6 | 4.3 | 5.2 | 77.2 | 51.2 | 49.2 | 84.2 |
| GPT-4 (1-shot) | 3.3 | 5.7 | 6.9 | 76.5 | 49.6 | 48.1 | 85.8 |
| GPT-4 (3-shot) | 3.6 | 5.9 | 6.9 | 76.5 | 48.8 | 46.9 | 84.4 |
| GPT-4 (5-shot) | 2.4 | 4.7 | 6.1 | 75.6 | 47.5 | 46.2 | 84.8 |
| Galactica | 9.7 | 11.1 | 12.5 | 81.5 | 58.1 | 53.4 | 95.6 |
| Mistral | 0.0 | 0.0 | 0.0 | 33.6 | 21.3 | 11.3 | 40.3 |
| Llama 2 | 0.0 | 0.0 | 0.0 | 29.5 | 18.8 | 11.3 | 18.1 |
| Code Llama | 0.0 | 0.0 | 0.0 | 30.7 | 20.0 | 12.0 | 81.0 |
| Molinst (instruction-tuned) | 0.0 | 0.0 | 0.0 | 43.9 | 25.1 | 18.4 | 96.2 |
| LlaSMolGalactica | 57.7 | 69.2 | 72.3 | 95.5 | 86.4 | 84.7 | 99.6 |
| LlaSMolLlama 2 | 41.8 | 52.7 | 56.5 | 91.1 | 76.8 | 75.4 | 99.1 |
| LlaSMolCode Llama | 49.9 | 60.1 | 63.8 | 93.1 | 80.9 | 80.0 | 99.3 |
| LlaSMolMistral | 70.1 | 77.8 | 80.1 | 96.6 | 90.1 | 89.1 | 99.6 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
50097994-6053-4949-b45f-3fe5a4d60d58
For the NC-S2F task (Table 6), LlaSMolMistral is the best-performing LLM on all metrics, tying with many other methods on validity. All LlaSMol models outperform all other methods except the SoTA task-specific method, which directly computes the answer using a fixed algorithm and is therefore 100% accurate. The best-performing LLM baselines are GPT-4 and GPT-4 (0-shot), but their performance is not remotely close to LlaSMol on any metric except validity. LlaSMol's ability to translate SMILES representations to molecular formulas demonstrates an understanding of SMILES and chemical formulas; while this is not a challenging task, evaluating it illustrates an LLM's understanding of these representations.

Table 6 (NC-S2F):

| Model | EM Top-1 | EM Top-3 | Validity |
|---|---|---|---|
| RDKit | 100.0 | - | 100.0 |
| GPT-4 | 18.0 | 28.0 | 100.0 |
| GPT-4 (zero-shot) | 16.0 | 27.0 | 100.0 |
| GPT-4 (0-shot) | 16.4 | 28.8 | 100.0 |
| GPT-4 (1-shot) | 16.0 | 26.0 | 100.0 |
| GPT-4 (3-shot) | 13.4 | 21.6 | 100.0 |
| GPT-4 (5-shot) | 12.8 | 23.2 | 100.0 |
| Galactica | 8.8 | 9.0 | 100.0 |
| Llama 2 | 0.3 | 0.7 | 99.7 |
| Code Llama | 0.1 | 0.6 | 100.0 |
| Mistral | 0.5 | 1.2 | 100.0 |
| Molinst (instruction-tuned) | 0.0 | 0.0 | 64.7 |
| LlaSMolGalactica | 90.0 | 94.8 | 100.0 |
| LlaSMolLlama 2 | 86.7 | 93.9 | 100.0 |
| LlaSMolCode Llama | 91.5 | 96.0 | 100.0 |
| LlaSMolMistral | 94.5 | 97.2 | 100.0 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
75900ed5-efdd-4169-8841-1563e04a612d
For the NC-S2I task (Table 7), LlaSMol is the best-performing LLM on all metrics. Interestingly, the models not fine-tuned on SMolInstruct cannot exactly match the expected output even 1% of the time. The SoTA task-specific method achieves 56.5% accuracy, indicating that this task is not easy. LlaSMol's ability to translate SMILES representations to IUPAC names suggests a level of understanding of the functional groups in the IUPAC specification, as well as an understanding of SMILES representations.

Table 7 (NC-S2I):

| Model | EM Top-1 | EM Top-3 | EM Top-5 |
|---|---|---|---|
| STOUT | 56.5 | - | - |
| GPT-4 (0-shot) | 0.0 | 0.0 | 0.0 |
| GPT-4 (1-shot) | 0.0 | 0.0 | 0.0 |
| GPT-4 (3-shot) | 0.2 | 0.2 | 0.2 |
| GPT-4 (5-shot) | 0.2 | 0.2 | 0.2 |
| Galactica | 0.0 | 0.0 | 0.0 |
| Llama 2 | 0.0 | 0.0 | 0.0 |
| Code Llama | 0.0 | 0.0 | 0.0 |
| Mistral | 0.0 | 0.0 | 0.0 |
| Molinst (instruction-tuned) | 0.0 | 0.0 | 0.0 |
| LlaSMolGalactica | 18.2 | 30.6 | 35.3 |
| LlaSMolLlama 2 | 10.3 | 18.7 | 21.7 |
| LlaSMolCode Llama | 15.5 | 26.2 | 30.5 |
| LlaSMolMistral | 29.0 | 45.3 | 50.5 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8e0285f8-f11e-4946-b657-7d915f74d5ba
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.1. Name Conversion 100.0 GPT-4 (5-shot) 12.8 23.2 100.0 Galactica 8.8 9.0 100.0 Llama 2 0.3 0.7 99.7 Code Llama 0.1 0.6 100.0 Mistral 0.5 1.2 100.0 Molinst (instruction-tuned) 0.0 0.0 64.7 LlaSMolGalactica 90.0 94.8 100.0 LlaSMolLlama 2 86.7 93.9 100.0 LlaSMolCode Llama 91.5 96.0 100.0 LlaSMolMistral 94.5 97.2 100.0 to IUPAC names suggests a level of understanding of the functional groups in the IUPAC specification, as well as an understanding of SMILES representations. Model EM Top 1 Top 3 Top 5 STOUT 56.5 - - GPT-4 (0-shot) 0.0 0.0 0.0 GPT-4 (1-shot) 0.0 0.0 0.0 GPT-4 (3-shot) 0.2 0.2 0.2 GPT-4 (5-shot) 0.2 0.2 0.2 Galactica 0.0 0.0 0.0 Llama 2 0.0 0.0 0.0 Code Llama 0.0 0.0 0.0 Mistral 0.0 0.0 0.0 Molinst (instruction-tuned) 0.0 0.0 0.0 LlaSMolGalactica 18.2 30.6 35.3 LlaSMolLlama 2 10.3 18.7 21.7 LlaSMolCode Llama 15.5 26.2 30.5 LlaSMolMistral 29.0 45.3 50.5
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
68602e5d-576a-48c4-89e7-dcf6478a7d8b
## B.1.2. Molecule Description

For the MG task (Table 8), LlaSMolMistral is the best-performing LLM on all metrics except validity, where LlaSMolLlama 2 outperforms it by a small margin. Its top-1 exact match score is especially impressive, more than tripling the top-1 performance of the best baseline LLM (Molinst). This indicates that the right combination of foundation model and fine-tuning dataset can support a better comprehension of natural language about molecules.

Table 8 (MG):

| Model | EM Top-1 | EM Top-3 | EM Top-5 | MACCS FTS | RDK FTS | Morgan FTS | Validity |
|---|---|---|---|---|---|---|---|
| MolT5 | 31.6 | 38.7 | 41.3 | 87.8 | 80.1 | 73.2 | 95.3 |
| GPT-4 (0-shot) | 2.8 | 3.4 | 4.0 | 75.8 | 55.8 | 46.5 | 93.0 |
| GPT-4 (1-shot) | 4.9 | 6.2 | 7.9 | 74.0 | 52.9 | 42.8 | 81.8 |
| GPT-4 (3-shot) | 5.9 | 8.2 | 8.9 | 74.8 | 53.5 | 43.3 | 85.2 |
| GPT-4 (5-shot) | 4.0 | 5.9 | 7.1 | 73.6 | 52.8 | 43.1 | 85.2 |
| Galactica | 0.0 | 0.0 | 0.0 | 22.7 | 11.8 | 11.6 | 94.7 |
| Llama 2 | 0.0 | 0.0 | 0.0 | 18.3 | 11.8 | 4.8 | 93.6 |
| Code Llama | 0.0 | 0.0 | 0.0 | 26.5 | 15.1 | 8.5 | 95.2 |
| Mistral | 0.0 | 0.1 | 0.1 | 32.2 | 18.4 | 9.0 | 35.9 |
| Molinst (instruction-tuned) | 6.0 | 11.6 | 13.4 | 69.5 | 53.5 | 43.6 | 84.8 |
| LlaSMolGalactica | 7.8 | 13.6 | 17.2 | 78.6 | 61.0 | 51.0 | 99.8 |
| LlaSMolLlama 2 | 4.8 | 9.0 | 10.6 | 71.7 | 52.7 | 44.2 | 99.9 |
| LlaSMolCode Llama | 6.5 | 11.8 | 14.2 | 74.0 | 55.9 | 46.6 | 99.8 |
| LlaSMolMistral | 19.2 | 29.2 | 33.7 | 84.1 | 70.3 | 61.7 | 99.8 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8e96117b-f023-4ff9-aba0-de2f94ebc368
For the MC task (Table 9), LlaSMolMistral is the best-performing LLM on all metrics. It is still outperformed by the SoTA task-specific model (MolT5), but comes close to it. Note that these text-based metrics only measure textual similarity to the reference and do not necessarily reflect the chemical correctness of the description; that is, they indicate how closely a generated description resembles the gold one rather than precisely measuring correctness. Limited by current resources, we cannot obtain correctness measures and leave this to future work.

Table 9 (MC; the ROUGE and METEOR entries were not recoverable and are left blank):

| Model | BLEU-2 | BLEU-4 | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR |
|---|---|---|---|---|---|---|
| MolT5 | 0.461 | 0.366 | – | – | – | – |
| GPT-4 (0-shot) | 0.107 | 0.027 | – | – | – | – |
| GPT-4 (1-shot) | 0.166 | 0.061 | – | – | – | – |
| GPT-4 (3-shot) | 0.202 | 0.092 | – | – | – | – |
| GPT-4 (5-shot) | 0.214 | 0.103 | – | – | – | – |
| Llama 2 | 0.110 | 0.047 | – | – | – | – |
| Code Llama | 0.106 | 0.052 | – | – | – | – |
| Mistral | 0.146 | 0.068 | – | – | – | – |
| Galactica | 0.018 | 0.002 | – | – | – | – |
| Molinst (instruction-tuned) | 0.028 | 0.020 | – | – | – | – |
| LlaSMolGalactica | 0.324 | 0.232 | – | – | – | – |
| LlaSMolLlama 2 | 0.292 | 0.203 | – | – | – | – |
| LlaSMolCode Llama | 0.322 | 0.226 | – | – | – | – |
| LlaSMolMistral | 0.414 | 0.319 | – | – | – | – |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0df6c220-119f-4257-80d1-e839f84fa311
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.2. Molecule Description | BLEU-2 | BLEU-4 | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR | |-----------------------------|----------|----------|-----------|-----------|-----------|----------| | MolT5 | | | | | | | | 0 | . | 461 | 0 | . | 366 | 0 | | GPT-4 (0-shot) | | | | | | | | 0 | . | 107 | 0 | . | 027 | 0 | | GPT-4 (1-shot) | | | | | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
71661dd9-39ad-478d-b770-0278f6ce0824
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.2. Molecule Description | 107 | 0 | . | 027 | 0 | | GPT-4 (1-shot) | | | | | | | | 0 | . | 166 | 0 | . | 061 | 0 | | GPT-4 (3-shot) | | | | | | | | 0 | . | 202 | 0 | . | 092 | 0 | | GPT-4 (5-shot) | | | | | | | | 0 | .
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
40a262e2-5540-49b3-80f4-1cdcbe4a7b8d
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.2. Molecule Description 0 | | GPT-4 (5-shot) | | | | | | | | 0 | . | 214 | 0 | . | 103 | 0 | | Llama 2 | | | | | | | | 0 | . | 110 | 0 | . | 047 | 0 | | Code Llama | | | | | | | | 0 | . | 106 | 0 | . | 052 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
cdea3cd8-db17-4161-8545-b828d74cd0fe
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.2. Molecule Description | | | | | | 0 | . | 106 | 0 | . | 052 | 0 | | Mistral | | | | | | | | 0 | . | 146 | 0 | . | 068 | 0 | | Galactica | | | | | | | | 0 | . | 018 | 0 | . | 002 | 0 | | Molinst (instruction-tuned) | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
81d67281-c7a1-4e01-96ce-607719bf9ef5
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.2. Molecule Description | | | 0 | . | 018 | 0 | . | 002 | 0 | | Molinst (instruction-tuned) | | | | | | | | 0 | . | 028 | 0 | . | 020 | 0 | | LlaSMol | | | | | | | | Galactica | | | | | | | | 0 | . | 324 | 0 | . | 232 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
40a1883d-dc41-4953-b1e5-a144b572a467
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.2. Molecule Description | | | | | | 0 | . | 324 | 0 | . | 232 | 0 | | LlaSMol | | | | | | | | Llama 2 | | | | | | | | 0 | . | 292 | 0 | . | 203 | 0 | | LlaSMol | | | | | | | | Code Llama |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
48963570-1a12-4162-bdfd-4ed46310db44
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.2. Molecule Description 203 | 0 | | LlaSMol | | | | | | | | Code Llama | | | | | | | | 0 | . | 322 | 0 | . | 226 | 0 | | LlaSMol | | | | | | | | Mistral | | | | | | | | 0 | . | 414 | 0 | .
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
08928f95-7f2f-46f6-b4d7-61bea01ef486
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.2. Molecule Description | | | | | | | | 0 | . | 414 | 0 | . | 319 | 0 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0e76b719-4d99-495b-ac46-e96719015447
## B.1.3. Chemical Reaction

For the FS task (Table 10), LlaSMolMistral is the best-performing LLM across all metrics, although it ties with many methods on validity. Notably, all of the LlaSMol models perform much better than the other LLMs, which indicates the power of fine-tuning on SMolInstruct for understanding chemical reactions. The SoTA task-specific methods still outperform all of the LLMs, but the LlaSMol series comes much closer to them than the other LLMs.

Table 10 (FS):

| Model | EM Top-1 | EM Top-3 | EM Top-5 | MACCS FTS | RDK FTS | Morgan FTS | Validity |
|---|---|---|---|---|---|---|---|
| Molecular Transformer | 78.5 | 85.5 | 87.1 | 95.5 | 93.0 | 91.4 | 99.5 |
| RSMILES | 78.7 | 87.9 | 89.7 | 95.7 | 93.7 | 92.2 | 100.0 |
| GPT-4 (0-shot) | 2.0 | 2.4 | 2.6 | 60.7 | 49.9 | 41.1 | 88.2 |
| GPT-4 (1-shot) | 1.1 | 2.2 | 2.6 | 61.4 | 49.6 | 41.2 | 90.8 |
| GPT-4 (3-shot) | 0.2 | 2.2 | 2.6 | 62.2 | 51.5 | 42.9 | 91.8 |
| GPT-4 (5-shot) | 1.3 | 2.0 | 3.0 | 63.4 | 52.2 | 44.3 | 93.6 |
| Galactica | 0.0 | 0.0 | 0.0 | 40.2 | 33.2 | 25.8 | 84.2 |
| Llama 2 | 0.0 | 0.0 | 0.0 | 33.5 | 24.4 | 13.7 | 97.7 |
| Code Llama | 0.0 | 0.0 | 0.0 | 35.4 | 26.5 | 15.8 | 99.6 |
| Mistral | 0.0 | 0.0 | 0.0 | 39.0 | 31.0 | 19.8 | 96.0 |
| Molinst (instruction-tuned) | 2.1 | 3.3 | 3.7 | 51.1 | 36.7 | 31.6 | 99.8 |
| LlaSMolGalactica | 52.7 | 53.9 | 70.6 | 88.6 | 82.3 | 79.7 | 99.8 |
| LlaSMolLlama 2 | 44.2 | 58.4 | 62.5 | 85.8 | 78.4 | 75.3 | 99.7 |
| LlaSMolCode Llama | 52.3 | 65.6 | 69.3 | 88.4 | 82.1 | 79.4 | 99.8 |
| LlaSMolMistral | 63.5 | 75.6 | 79.1 | 91.8 | 87.2 | 85.0 | 99.8 |

We observe a similar trend for the RS task (Table 11). Again, LlaSMolMistral is the best-performing LLM across all metrics, although it ties with LlaSMolLlama 2 on validity. The LLMs without instruction tuning fail to achieve any accuracy greater than 2% on this task, which indicates that instruction tuning can be useful for LLMs to learn retrosynthesis. The SoTA task-specific methods still outperform all of the LLMs, indicating that there is still room for improvement for LLMs on RS.

Table 11 (RS):

| Model | EM Top-1 | EM Top-3 | EM Top-5 | MACCS FTS | RDK FTS | Morgan FTS | Validity |
|---|---|---|---|---|---|---|---|
| Molecular Transformer | 47.0 | 61.7 | 66.4 | 87.0 | 81.5 | 77.5 | 99.7 |
| RSMILES | 46.2 | 63.9 | 69.9 | 86.5 | 80.9 | 76.7 | 100.0 |
| GPT-4 (0-shot) | 0.0 | 0.2 | 0.4 | 60.9 | 36.8 | 35.2 | 48.8 |
| GPT-4 (1-shot) | 0.3 | 0.8 | 1.4 | 66.6 | 42.5 | 40.9 | 79.6 |
| GPT-4 (3-shot) | 0.5 | 1.2 | 1.6 | 68.2 | 45.6 | 42.2 | 87.8 |
| GPT-4 (5-shot) | 0.2 | 0.8 | 1.2 | 68.3 | 46.0 | 43.1 | 84.4 |
| Galactica | 0.0 | 0.0 | 0.0 | 48.9 | 38.3 | 34.6 | 93.0 |
| Llama 2 | 0.0 | 0.0 | 0.0 | 46.5 | 35.0 | 27.5 | 87.7 |
| Code Llama | 0.0 | 0.1 | 0.0 | 44.7 | 32.1 | 25.3 | 97.1 |
| Mistral | 0.0 | 0.0 | 0.0 | 44.6 | 32.0 | 24.2 | 98.0 |
| Molinst (instruction-tuned) | 5.7 | 8.3 | 9.5 | 69.6 | 53.7 | 48.0 | 97.8 |
| LlaSMolGalactica | 25.3 | 39.4 | 45.4 | 80.9 | 71.9 | 67.0 | 99.8 |
| LlaSMolLlama 2 | 22.4 | 35.5 | 41.1 | 79.8 | 70.4 | 65.2 | 99.9 |
| LlaSMolCode Llama | 25.7 | 40.0 | 45.7 | 80.7 | 71.7 | 66.7 | 100.0 |
| LlaSMolMistral | 32.9 | 49.6 | 55.4 | 83.0 | 75.1 | 70.4 | 100.0 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
684bb3cf-983f-4d16-80c4-3f916885794d
## B.1.4. Property Prediction

For the two regression tasks in molecular property prediction (Table 12), LlaSMolMistral achieves the highest performance among LLMs on both tasks. Notably, LlaSMolMistral outperforms the best baseline LLM, GPT-4, by a large margin, and all the other LlaSMol models also outperform it on the Lipo task. This indicates the power of fine-tuning on SMolInstruct for understanding the properties of molecules. However, the task-specific method (Uni-Mol) still outperforms all of the LLMs.

(Table 12 reports RMSE (lower is better) on ESOL and Lipo for Uni-Mol, GPT-4 (0/1/3/5-shot), Galactica, Llama 2, Code Llama, Mistral, Molinst, and the LlaSMol models; its numeric entries were not recoverable.)

For the four classification tasks in molecular property prediction (Table 13), LlaSMolMistral achieves the highest accuracy values among LLMs on all the tasks. In particular, on the PP-SIDER task, LlaSMolMistral outperforms all the LLMs and the task-specific model Uni-Mol on all three metrics, which highlights the potential of LLMs in understanding molecules and predicting their properties. However, LlaSMolMistral achieves very low F1 values on PP-ClinTox and PP-HIV. This suggests that LlaSMolMistral can struggle with the data imbalance in these two tasks, attaining high accuracy values by predicting most samples as negative. Similarly, most other LLMs also achieve either poor accuracy values (e.g., 6.3% and 4.4% for Molinst on PP-ClinTox and PP-HIV) or poor F1 values (e.g., 0.0% for Galactica on both). Therefore, there is still room to further improve the robustness of LLMs when dealing with imbalanced datasets.

Table 13 (classification PP tasks):

| Model | BBBP F1 | BBBP MCC | BBBP Acc | ClinTox F1 | ClinTox MCC | ClinTox Acc | HIV F1 | HIV MCC | HIV Acc | SIDER F1 | SIDER MCC | SIDER Acc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Uni-Mol | 89.5 | 0.651 | 85.3 | 42.1 | 0.381 | 92.4 | 32.8 | 0.361 | 97.0 | 76.5 | 0.366 | 70.0 |
| GPT-4 (0-shot) | 69.8 | 0.300 | 64.0 | 16.5 | 0.125 | 36.8 | 9.2 | 0.056 | 56.6 | 59.9 | 0.169 | 56.6 |
| GPT-4 (1-shot) | 70.5 | 0.378 | 66.0 | 15.7 | 0.131 | 25.7 | 9.6 | 0.087 | 39.4 | 45.0 | −0.084 | 43.2 |
| GPT-4 (3-shot) | 63.5 | 0.344 | 60.9 | 17.9 | 0.175 | 36.1 | 8.5 | 0.040 | 48.0 | 40.2 | −0.143 | 39.8 |
| GPT-4 (5-shot) | 59.1 | 0.319 | 57.9 | 11.1 | −0.047 | 33.3 | 9.1 | 0.052 | 55.8 | 55.4 | 0.022 | 50.4 |
| Galactica | 81.7 | 0.000 | 69.0 | 0.0 | −0.023 | 92.4 | 0.0 | 0.000 | 96.7 | 70.9 | 0.364 | 68.1 |
| Llama 2 | 72.0 | −0.043 | 58.9 | 15.2 | 0.070 | 45.1 | 1.4 | −0.020 | 93.3 | 71.2 | 0.177 | 61.9 |
| Code Llama | 70.1 | 0.043 | 58.9 | 0.0 | −0.079 | 85.4 | 4.5 | 0.005 | 91.8 | 69.9 | 0.140 | 60.2 |
| Mistral | 33.9 | 0.046 | 40.6 | 12.9 | −0.003 | 15.3 | 6.2 | −0.016 | 7.1 | 36.0 | −0.202 | 38.1 |
| Molinst (instruction-tuned) | 86.0 | 0.000 | 60.9 | 15.9 | 0.000 | 6.3 | 6.0 | −0.001 | 4.4 | 75.3 | 0.043 | 52.4 |
| LlaSMolGalactica | 81.7 | 0.000 | 69.0 | 0.0 | 0.000 | 93.1 | 0.0 | 0.000 | 96.7 | 79.9 | 0.418 | 67.9 |
| LlaSMolLlama 2 | 81.3 | −0.048 | 68.5 | 0.0 | 0.000 | 93.1 | 2.9 | 0.120 | 96.7 | 78.2 | 0.360 | 65.7 |
| LlaSMolCode Llama | 81.6 | 0.042 | 69.0 | 0.0 | 0.000 | 93.1 | 0.0 | 0.000 | 96.7 | 79.1 | 0.409 | 69.9 |
| LlaSMolMistral | 83.7 | 0.340 | 74.6 | 0.0 | 0.000 | 93.1 | 4.3 | 0.111 | 96.7 | 79.9 | 0.429 | 70.7 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d78711e4-f156-4f90-aba4-876a820a13f3
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.4. Property Prediction Transformer 78.5 85.5 87.1 95.5 93.0 91.4 99.5 RSMILES 78.7 87.9 89.7 95.7 93.7 92.2 100.0 GPT-4 (0-shot) 2.0 2.4 2.6 60.7 49.9 41.1 88.2 GPT-4 (1-shot) 1.1 2.2 2.6 61.4 49.6 41.2 90.8 GPT-4 (3-shot) 0.2 2.2 2.6 62.2 51.5 42.9 91.8 GPT-4 (5-shot) 1.3 2.0 3.0 63.4 52.2 44.3 93.6 Galactica 0.0 0.0 0.0 40.2 33.2 25.8 84.2 Llama 2 0.0 0.0 0.0 33.5 24.4 13.7 97.7 Code Llama 0.0 0.0 0.0 35.4 26.5 15.8 99.6 Mistral 0.0 0.0 0.0 39.0 31.0 19.8 96.0 Molinst (instruction-tuned) 2.1 3.3 3.7 51.1 36.7 31.6 99.8 LlaSMolGalactica 52.7 53.9 70.6 88.6 82.3 79.7 99.8 LlaSMolLlama 2 44.2 58.4 62.5 85.8 78.4 75.3 99.7 LlaSMolCode Llama 52.3 65.6 69.3 88.4 82.1 79.4 99.8 LlaSMolMistral 63.5 75.6 79.1 91
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
84e8d445-0eec-4649-b468-da9205bdc4e8
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.4. Property Prediction laSMolGalactica 52.7 53.9 70.6 88.6 82.3 79.7 99.8 LlaSMolLlama 2 44.2 58.4 62.5 85.8 78.4 75.3 99.7 LlaSMolCode Llama 52.3 65.6 69.3 88.4 82.1 79.4 99.8 LlaSMolMistral 63.5 75.6 79.1 91.8 87.2 85.0 99.8 Model Exact Match FTS Validity Top 1 Top 3 Top 5 MACCS RDK Morgan Molecular Transformer 47.0 61.7 66.4 87.0 81.5 77.5 99.7 RSMILES 46.2 63.9 69.9 86.5 80.9 76.7 100.0 GPT-4 (0-shot) 0.0 0.2 0.4 60.9 36.8 35.2 48.8 GPT-4 (1-shot) 0.3 0.8 1.4 66.6 42.5 40.9 79.6 GPT-4 (3-shot) 0.5 1.2 1.6 68.2 45.6 42.2 87.8 GPT-4 (5-shot) 0.2 0.8 1.2 68.3 46.0 43.1 84.4 Galactica 0.0 0.0 0.0 48.9 38.3 34.6 93.0 Llama 2 0.0 0.0 0.0 46.5 35.0 27.5 87.7 Code Llama 0.0 0.1 0.0 44.7 32.1 25.3 97.1 Mistral 0.0 0.0 0.0 44.6 32.0 24
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
99f94028-d4f7-4fe3-ba23-90f409d8c888
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.4. Property Prediction .1 84.4 Galactica 0.0 0.0 0.0 48.9 38.3 34.6 93.0 Llama 2 0.0 0.0 0.0 46.5 35.0 27.5 87.7 Code Llama 0.0 0.1 0.0 44.7 32.1 25.3 97.1 Mistral 0.0 0.0 0.0 44.6 32.0 24.2 98.0 Molinst (instruction-tuned) 5.7 8.3 9.5 69.6 53.7 48.0 97.8 LlaSMolGalactica 25.3 39.4 45.4 80.9 71.9 67.0 99.8 LlaSMolLlama 2 22.4 35.5 41.1 79.8 70.4 65.2 99.9 LlaSMolCode Llama 25.7 40.0 45.7 80.7 71.7 66.7 100.0 LlaSMolMistral 32.9 49.6 55.4 83.0 75.1 70.4 100.0 | Model | ESOL | |-----------------------------|--------| | ↓ | | | Lipo | | | ↓ | | | Uni-Mol | | | 0
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b2ab7986-9e28-4e59-a51d-d66ceb6cb888
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.4. Property Prediction | | | Lipo | | | ↓ | | | Uni-Mol | | | 0 | . | | GPT-4 (0-shot) | | | 2 | . | | GPT-4 (1-shot) | | | 2 | . | | GPT-4 (3-shot) | | | 2 | . | | GPT-4 (5-shot) | | | 1 | . | | Galactica | | | 4 | . | | Llama 2
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
af37bf8a-38ac-4805-9ced-f75f8af85a88
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.4. Property Prediction | | | 1 | . | | Galactica | | | 4 | . | | Llama 2 | | | 3 | . | | Code Llama | | | 3 | . | | Mistral | | | 3 | . | | Molinst (instruction-tuned) | | | 4 | . | | LlaSMol | | | Galactica | | | 2 | . | | LlaSMol
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ef4f6e60-55a4-4521-a6e5-190d4ab42865
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.4. Property Prediction | | LlaSMol | | | Galactica | | | 2 | . | | LlaSMol | | | Llama 2 | | | 2 | . | | LlaSMol | | | Code Llama | | | 2 | . | | LlaSMol | | | Mistral | | | 1 | . | Model PP-BBBP PP-ClinTox PP-HIV PP-SIDER F1 MCC Acc F1 MCC Acc F1 MCC Acc F1 MCC Acc Uni-Mol 89.5 0.651 85.3 42.1 0.381 92.4 32.8
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8e15a60a-2fc9-472d-8e80-b026641ef15a
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.4. Property Prediction | | 1 | . | Model PP-BBBP PP-ClinTox PP-HIV PP-SIDER F1 MCC Acc F1 MCC Acc F1 MCC Acc F1 MCC Acc Uni-Mol 89.5 0.651 85.3 42.1 0.381 92.4 32.8 0.361 97.0 76.5 0.366 70.0 GPT-4 (0-shot) 69.8 0.300 64.0 16.5 0.125 36.8 9.2 0.056 56.6 59.9 0.169 56.6 GPT-4 (1-shot) 70.5 0.378 66.0 15.7 0.131 25.7 9.6 0.087 39.4 45.0 −0.084 43.2 GPT-4 (3-shot) 63.5 0.344 60.9 17.9 0.175 36.1 8.5 0.040 48.0 40.2 −0.143 39.8 GPT-4 (5-shot) 59.1 0.319 57.9 11.1 −0.047 33.3 9.1 0.052 55.8 55.4 0.022 50.4 Galactica 81.7 0.000 69.0 0.0 −0.023 92.4 0.0 0.000 96.7 70.9 0.364 68.1 Llama 2 72.0 −0.043 58.9 15.2 0.070 45.1 1.4 −0.020 93.3 71.2 0.177 61.9 Code Llama 70.1 0.043 58.9 0.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d30ab34c-be34-4029-ad87-b6f542813953
# Llasmol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset ## B.1.4. Property Prediction .4 Galactica 81.7 0.000 69.0 0.0 −0.023 92.4 0.0 0.000 96.7 70.9 0.364 68.1 Llama 2 72.0 −0.043 58.9 15.2 0.070 45.1 1.4 −0.020 93.3 71.2 0.177 61.9 Code Llama 70.1 0.043 58.9 0.0 −0.079 85.4 4.5 0.005 91.8 69.9 0.140 60.2 Mistral 33.9 0.046 40.6 12.9 −0.003 15.3 6.2 −0.016 7.1 36.0 −0.202 38.1 Molinst (instruction-tuned) 86.0 0.000 60.9 15.9 0.000 6.3 6.0 −0.001 4.4 75.3 0.043 52.4 LlaSMolGalactica 81.7 0.000 69.0 0.0 0.000 93.1 0.0 0.000 96.7 79.9 0.418 67.9 LlaSMolLlama 2 81.3 −0.048 68.5 0.0 0.000 93.1 2.9 0.120 96.7 78.2 0.360 65.7 LlaSMolCode Llama 81.6 0.042 69.0 0.0 0.000 93.1 0.0 0.000 96.7 79.1 0.409 69.9 LlaSMolMistral 83.7 0.340 74.6 0.0 0.000 93.1 4.3 0.111 96.7 79.9 0.429 70.7
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
21d54f77-2e24-47da-bf1c-ae0af027e74f
## B.2. Influence Of LoRA Modules And Trainable Parameters

Figure 4 presents the model performance on all 14 tasks under the different LoRA settings and base models described in Section 5.4. One key observation is that incorporating more LoRA modules during training consistently enhances the performance of LlaSMol on most tasks. In addition, comparing LlaSMol and LlaSMol Large shows that LlaSMol Large, with its larger base model, consistently outperforms all the LlaSMol models under the different LoRA settings on almost all the tasks, except the four classification tasks for molecular property prediction. These observations are consistent with those described in Section 5.4.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
65adf465-14ac-4550-98f8-3f7d7f1a5413
## C. More Statistics Of SMolInstruct

To better characterize the proposed SMolInstruct dataset, we compute statistics over its molecules (represented as SMILES). Altogether, it contains 1.6M distinct molecules, and several important statistics are shown in Figure 5. Specifically, **Bertz complexity** is a topological index that measures the complexity of molecules based on the number and types of bonds and atoms. **Atom count** is the number of atoms in a molecule and reflects its size. **Molecular weight** is the sum of the atomic weights of the atoms in a molecule. **Ring count** is the number of rings in the molecular structure. As shown, these values vary widely, indicating extensive coverage in terms of complexity, size, and structure. Notably, compared to Mol-Instructions (Fang et al., 2023), the molecules in SMolInstruct show greater complexity and diversity, which indicates that the tasks in SMolInstruct can be more comprehensive and complicated than those in Mol-Instructions. The scale, diversity, and careful construction of SMolInstruct make it well-suited for training chemistry LLMs.
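For reference, these per-molecule quantities can all be computed with RDKit; the sketch below is ours, mirroring the quantities reported in Figure 5:

```python
# Per-molecule statistics (Bertz complexity, atom count, molecular weight,
# ring count) computed with RDKit from a SMILES string.
from rdkit import Chem
from rdkit.Chem import Descriptors, GraphDescriptors

def molecule_stats(smiles: str) -> dict:
    mol = Chem.MolFromSmiles(smiles)
    return {
        "bertz_complexity": GraphDescriptors.BertzCT(mol),
        "atom_count": mol.GetNumAtoms(),          # heavy atoms by default
        "molecular_weight": Descriptors.MolWt(mol),
        "ring_count": mol.GetRingInfo().NumRings(),
    }

print(molecule_stats("CCOC(=O)C1=CN=CN1C(C)C1=CC=CC=C1"))
```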
{ "creation_datetime": "2024-03-04", "file_name": "2402.09391v1.md", "file_path": "paper_data/2402.09391v1.md", "file_size": 91859, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
328ea03d-81c6-4344-ad1c-a0bcd14e35f7
# CodeMind: A Framework To Challenge Large Language Models For Code Reasoning

Changshu Liu, Shizhuo Dylan Zhang, Reyhaneh Jabbarvand
Department of Computer Science, University of Illinois at Urbana-Champaign, Illinois, US
{ "creation_datetime": "2024-03-04", "file_name": "2402.09664v2.md", "file_path": "paper_data/2402.09664v2.md", "file_size": 54178, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
df93e542-0c55-4d00-a6a0-a771aeac7e55
## Abstract

Solely relying on test passing to evaluate Large Language Models (LLMs) for code synthesis may result in unfair assessment or promote models with data leakage. As an alternative, we introduce CodeMind, a framework designed to gauge the code reasoning abilities of LLMs. CodeMind currently supports three code reasoning tasks: Independent Execution Reasoning (IER), Dependent Execution Reasoning (DER), and Specification Reasoning (SR). The first two evaluate a model's ability to predict the execution output of arbitrary code, or of code the model itself correctly synthesized. The third evaluates the extent to which LLMs implement the specified expected behavior. Our extensive evaluation of nine LLMs across five benchmarks in two different programming languages using CodeMind shows that LLMs fairly understand control flow constructs and, in general, are capable of reasoning about how inputs evolve to outputs, specifically for simple programs and ones they can correctly synthesize. However, their performance drops for code with higher complexity, non-trivial logical and arithmetic operators, non-primitive types, and API calls. Furthermore, we observe that, while correlated, specification reasoning (essential for code synthesis) does not imply execution reasoning (essential for broader programming tasks such as testing and debugging): ranking LLMs based on test passing can differ from ranking them based on code reasoning.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09664v2.md", "file_path": "paper_data/2402.09664v2.md", "file_size": 54178, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
40dace5a-a44a-464a-b6fd-3bbdb55f3044
## 1. Introduction

Large Language Models (LLMs) have shown exceptional programming abilities, specifically when instruction-tuned or prompted through Chain- or Tree-of-Thoughts (CoT (Wei et al., 2022b) or ToT (Yao et al., 2023)) and in-context learning (Wei et al., 2022a; Garg et al., 2022). However, several studies suggest that LLMs struggle to generalize this exceptional ability, specifically when the dataset becomes more complex (Du et al., 2023; Jimenez et al., 2023) or the task requires understanding code rather than natural language (Pan et al., 2023; Min et al., 2023). This is mainly because LLMs are trained to associate code synthesis with natural language specifications, i.e., to reason about how to combine code constructs similar to examples they have seen while satisfying requirements explained in the specification.

To illustrate how code reasoning tasks can evaluate LLMs, Figure 1-a shows code synthesized by GPT-3.5 given a natural language specification. The code constructs corresponding to the specification are highlighted with matching colors. Due to ambiguity in the natural language, this code returns the smallest number in the list rather than the number at the index equal to the value of the smallest number. As a result, for the input [2, 5, 4, 3], the code returns 2 instead of 4, and the assertion fails.

One way to assess the inductive code reasoning of LLMs is to include specific expected program behavior in the specification and check whether the generated code can reproduce that behavior. This entails a level of code reasoning which we refer to as Specification Reasoning (SR). Figure 1-b shows the new specification and the corresponding generated code. Executing the code on the specified input-output pair results in a test pass, indicating the ability of GPT-3.5 to understand the given specification and generate a *correct* code. Including test data in prompts has been a known practice to improve the performance of models in programming tasks (Chen et al., 2022; Zhong et al., 2022; Shi et al., 2022; Zhang et al., 2023). However, it is a weak proxy for code reasoning, as it still involves the association of code and natural language.
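To make the ambiguity concrete, the following minimal sketch (our illustration, not GPT-3.5's verbatim output) contrasts the two readings of the specification in Figure 1-a:

```python
# Ambiguous reading (what the model synthesized): return the smallest number.
def f_wrong(numbers):
    return min(numbers)

# Intended reading: return the value at the index given by the smallest number.
def f_right(numbers):
    return numbers[min(numbers)]

assert f_right([2, 5, 4, 3]) == 4  # min is 2, so numbers[2] == 4
assert f_wrong([2, 5, 4, 3]) == 2  # the ambiguous version returns 2 instead
```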
{ "creation_datetime": "2024-03-04", "file_name": "2402.09664v2.md", "file_path": "paper_data/2402.09664v2.md", "file_size": 54178, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b0600c76-fc53-4f16-abc7-a46f6c33c31a
*(Figure 1: (a) Code Synthesis, Prompt 1: "write a python program that given a list of numbers, return the value of number at the index specified by the value of smallest number in the list."; (b) Specification Reasoning, Prompt 2 adds the example "input=[2,5,4,3] returns 4"; (c) Execution Reasoning, where GPT-3.5's chain of thought states that the code "finds the index of the minimum value in the list" and "retrieves the value at the index of minimum value in the list".)*
{ "creation_datetime": "2024-03-04", "file_name": "2402.09664v2.md", "file_path": "paper_data/2402.09664v2.md", "file_size": 54178, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b64df57c-0e90-4cdf-b3f5-17d4eee03d13
A deeper level of code reasoning is reasoning about execution output given an input, which we call Execution Reasoning (ER). This task challenges LLMs more, requiring them to reason about code without any natural-language cross-reference. Figure 1-c shows the CoT reasoning of GPT-3.5 on the ER task: even though the model could generate code that produced the expected output (and is correct if validated through testing), it cannot correctly reason about the code's execution on the same inputs to predict the output.

To automate code reasoning assessment, we propose CodeMind. CodeMind currently offers *three* inductive code reasoning tasks: **Independent Execution Reasoning (IER)** and **Dependent Execution Reasoning (DER)** assess whether LLMs can reason about how given inputs evolve to outputs, for arbitrary code (IER) or only for code the model correctly synthesized (DER). **Specification Reasoning (SR)** evaluates the extent to which LLMs can reason about and implement the specified behavior.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09664v2.md", "file_path": "paper_data/2402.09664v2.md", "file_size": 54178, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ab264877-0e25-486d-b25a-966add55a6fc
Using CodeMind, we performed a large-scale grounded-theory study to assess LLMs for code reasoning. We selected nine models, including both general-purpose and Code LLMs, and prompted them for IER, DER, and SR tasks on 5395 programs written in Java and Python, drawn from *five* programming benchmarks: HumanEval (Chen et al., 2021), MBPP (Odena et al., 2021), CRUXEval (Gu et al., 2024), CodeNet (Puri et al., 2021), and Avatar (Ahmad et al., 2021). We observe that: (1) LLMs have a good grasp of code constructs, likely due to alignment with concepts in the natural language specification. The instruction-tuned models can explain code statement by statement and, in general, follow the execution of programs. LLMs' code reasoning abilities, however, are limited to simple programs. Furthermore, models such as GPT-3.5 and MagicCoder (Wei et al., 2023), although they correctly explain what the code does, may fail to keep track of data flow and correctly reason about execution output. Open-source LLMs that have achieved effectiveness comparable to GPT models in code synthesis (Wei et al., 2023; Roziere et al., 2023; Luo et al., 2023) lag behind them with a *huge gap* concerning code reasoning (§5). (2) LLMs can reason about test data in the specification, even if deceptive, and bring it into the reasoning process for code synthesis (§7). However, their reasoning is bottlenecked by their inherent limitations; they achieve higher performance reasoning about code they can correctly synthesize (§6).
{ "creation_datetime": "2024-03-04", "file_name": "2402.09664v2.md", "file_path": "paper_data/2402.09664v2.md", "file_size": 54178, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b1123055-187e-4952-8cba-c002ede66c1e
(3) On a dataset with complex programs, there is negligible to no correlation between the ranking of models based on code synthesis (generating code that passes all tests) and their code reasoning performance (§6). This necessitates CodeMind's tasks and metrics to complement the evaluation of LLMs for code. (4) Nested code constructs, complex conditional predicates and loop conditions, non-trivial arithmetic and logic operators, and API invocations can significantly challenge LLMs for code reasoning (§8).

Our contributions are: (1) the CodeMind framework, which formally defines three inductive code reasoning tasks; CodeMind is open-source (CodeMind, 2024) and accepts contributions from researchers to integrate more code reasoning tasks. (2) A large-scale grounded-theory evaluation of LLMs for code reasoning using CodeMind. (3) A comprehensive, in-depth analysis of the results that offers a catalog of root causes negatively impacting LLMs' code reasoning abilities. This catalog can serve as a valuable guideline for developing better benchmarks that truly evaluate the programming abilities of LLMs.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09664v2.md", "file_path": "paper_data/2402.09664v2.md", "file_size": 54178, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5a326849-5edd-4cdc-9aec-d25d853e7d11
## 2. Related Work

A large body of work has assessed LLMs on reasoning tasks across different modalities (Deshpande et al., 2021; Wu et al., 2023; Miceli-Barone et al., 2023; Bubeck et al., 2023; Wang et al., 2023; Imani et al., 2023; Luo et al., 2023; Huang et al., 2023; Valmeekam et al., 2022; Min et al., 2023), including natural language, visual data, math, logic, and code. CodeMind is most closely related to recent studies focusing on code reasoning (La Malfa et al., 2024; Gu et al., 2024; Zhang et al., 2024). The CRUXEval benchmark is concurrent work investigating the code reasoning abilities of LLMs using a dataset of simple programs generated by CodeLlama (34B) with test cases (Gu et al., 2024); it evaluates a series of LLMs on input and output prediction tasks. Compared to CRUXEval, CodeMind proposes more inductive code reasoning tasks, includes more programs spanning a wider range of complexity, and controls for the difference between code synthesis and reasoning by evaluating LLMs on the same programs. CodeMind is also equipped with a static analysis pipeline that enables in-depth examination and informed conclusions. La Malfa et al. (2024) evaluate LMs on predicting variable values at each code statement. Our experiments are larger in scale: more programs with a diverse distribution of complexity, multiple programming languages, and more studied LLMs. We also offer more code reasoning tasks and present a cross-analysis of code synthesis and reasoning abilities. Zhang et al. (2024) investigate transformers' ability to learn or infer recursive patterns from input-output pairs. They conclude that, due to inherent limitations of transformers, models may fail to learn recursion and instead find shortcut algorithms to reason about how outputs relate to inputs. Compared to this work, we evaluate LLMs regardless of architecture and training data, from the program perspective. We show that LLMs can understand recursion but often lose track of data flow due to an inability to correctly reason about loop conditions.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09664v2.md", "file_path": "paper_data/2402.09664v2.md", "file_size": 54178, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e539fa44-9325-43bd-bbd4-ec9ea4ca4021
## 3. CodeMind

A program specification defines a function $S : S_I \to S_O$, where $S_I$ is the set of all possible inputs to the program and $S_O$ is the set of corresponding outputs. Code synthesized from the specification is usually a (partial) function $C : C_I \to C_O$. We define a program to be correct with respect to the specification if it satisfies all of the following conditions:

$$C_I \subseteq S_I, \quad C_O \subseteq S_O, \quad \forall i \in C_I,\; C(i) = S(i)$$

This entails that a model can reason about how inputs evolve into a given output through the implementation (execution reasoning) and can implement code that generates correct outputs for given inputs (specification reasoning).
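The following minimal sketch illustrates the definition above; `spec_oracle` and `candidate` are hypothetical stand-ins for $S$ and $C$, and checking agreement on sampled inputs is of course a necessary rather than sufficient condition for correctness.

```python
# Sketch: empirically checking C(i) == S(i) over a sample of valid inputs.
# `spec_oracle` (S) and `candidate` (C) are hypothetical stand-ins.
def is_correct_on(inputs, spec_oracle, candidate):
    """True if the candidate agrees with the specification oracle on
    every sampled input (necessary, not sufficient, for correctness)."""
    return all(candidate(i) == spec_oracle(i) for i in inputs)

spec_oracle = lambda nums: nums[min(nums)]   # S: intended behavior
candidate = lambda nums: nums[min(nums)]     # C: code under test
print(is_correct_on([[2, 5, 4, 3], [0, 9, 8]], spec_oracle, candidate))  # True
```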
{ "creation_datetime": "2024-03-04", "file_name": "2402.09664v2.md", "file_path": "paper_data/2402.09664v2.md", "file_size": 54178, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9d8911f1-d2ea-4a18-84bd-a17ec29edd12
## 3.1. Execution Reasoning

Considering the aforementioned formalization, we define two execution reasoning tasks as follows.

**Definition 1: Independent Execution Reasoning (IER).** Given a program $C : C_I \to C_O$ and a set of inputs $\hat{I} = \{i \mid i \in C_I\}$, LLM $L$ correctly reasons about code execution if $\hat{o} = C(\hat{I})$, where $\hat{o} = L(\hat{I})$ is the output predicted by $L$.

Note that this task does not involve the specification, so we can assess LLMs on any arbitrary code for which we have ground-truth pairs $\langle \hat{I}, \hat{o} \rangle$. IER evaluates LLMs on arbitrary code for general inductive code reasoning, which requires understanding code constructs, arithmetic and logic operations, and control flow. However, even for human developers, reasoning about code they developed themselves is easier than reasoning about arbitrary code. Furthermore, as a self-consistency measurement (Min et al., 2023), LLMs should be able to reason about the code they can correctly synthesize. This motivates the following execution reasoning task.

**Definition 2: Dependent Execution Reasoning (DER).** Given a specification $S : S_I \to S_O$, a program $C : C_I \to C_O$ generated by LLM $L$, and a set of inputs $\hat{I} = \{i \mid i \in C_I,\, C(i) = S(i)\}$, LLM $L$ correctly reasons about code execution if $\hat{o} = C(\hat{I})$, where $\hat{o} = L(\hat{I})$ is the output predicted by $L$. The assumption here is that when LLM $L$ generates code $C$ that passes the test $\langle \hat{I}, \hat{o} \rangle$, it should be able to predict $\hat{o}$ correctly.
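A minimal sketch of how an IER check might be scored follows; `ask_llm` is a hypothetical query function, and the prompt wording is our assumption rather than CodeMind's actual implementation.

```python
# Sketch of Independent Execution Reasoning (IER) scoring: the model predicts
# the output of running `code` on `test_input`; compare against ground truth.
def ier_score(code: str, test_input, ground_truth, ask_llm) -> bool:
    prompt = (
        "Given the following code and input, predict the output.\n"
        f"Code:\n{code}\n"
        f"Input: {test_input!r}\n"
        "Output:"
    )
    predicted = ask_llm(prompt)      # hypothetical LLM query function
    return predicted.strip() == repr(ground_truth)

# DER applies the same check, restricted to code the same model previously
# synthesized and that passed the test <I, o>.
```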
{ "creation_datetime": "2024-03-04", "file_name": "2402.09664v2.md", "file_path": "paper_data/2402.09664v2.md", "file_size": 54178, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0595df4a-b80b-44e6-99ac-0e7a4204c28c
## 3.2. Specification Reasoning

In addition to inductive execution reasoning, a model should understand the specification to synthesize correct code. We formally define the specification reasoning task as follows.

**Definition 3: Specification Reasoning (SR).** Given a specification $S : S_I \to S_O$, an arbitrary pair $\langle i, o \rangle$ specified in the prompt along with the natural language specification, where $i \in S_I$, $o \in S_O$, and $S(i) = o$, and a program $C : C_I \to C_O$ generated by LLM $L$, the LLM correctly reasons about the specification if $C(i) = S(i)$. In other words, LLM $L$ should be able to pass a test with $\langle i, o \rangle$ when the pair is explicitly specified in the prompt.
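Under the same assumptions as the sketches above (`ask_llm` and `run_code` are hypothetical helpers for querying the model and executing generated code in a sandbox), SR can be scored by embedding $\langle i, o \rangle$ in the synthesis prompt and testing the generated code on exactly that pair:

```python
# Sketch of Specification Reasoning (SR): embed the <i, o> pair in the
# synthesis prompt, then test whether the generated code reproduces it.
def sr_score(nl_spec: str, i, o, ask_llm, run_code) -> bool:
    prompt = (
        f"{nl_spec}\n"
        f"For example, input={i!r} returns {o!r}.\n"
        "Write a Python function `solution` implementing this."
    )
    code = ask_llm(prompt)         # hypothetical LLM query function
    return run_code(code, i) == o  # hypothetical sandboxed executor
```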
{ "creation_datetime": "2024-03-04", "file_name": "2402.09664v2.md", "file_path": "paper_data/2402.09664v2.md", "file_size": 54178, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }