doc_id: string (36 chars) | contents: string (22–3.25k chars) | metadata: dict
67fd2bf0-b6a5-4e5e-8ca7-2dc0059b69bf
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 4.2 Multilingual Evaluation tion, these tools have shown remarkable improvement in recent years (Neves et al., 2023), enabling cost-effective multilingual evaluation. The methodology for multilingual evaluation and the prompt template are the same as those used in the 3-shot scenario for English. The only differences lie in the translation of the questions, options, and context, while the examples used for few-shot learning remain unchanged.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
508724a1-edfb-4a4d-9ce5-33c7b78b8d81
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 4.3 Instruction Prompting All of our instructions adhere to the guidelines outlined for GPT-4's medical evaluation, as detailed in Nori et al. (2023a). Each task is presented as an MCQA, with answer options associated with letters (A to D or A to E). For a comprehensive list of the instruction prompts, please refer to Appendix F. During inference, the model predicts the next token based on the input prompt, generating probabilities for each token in the vocabulary. To ensure relevance, the vocabulary is filtered to include only tokens (here, choice letters) corresponding to the expected answer options. This approach prevents the model from generating irrelevant tokens or hallucinations (Liang et al., 2023; Beeching et al., 2023; Chen et al., 2023).
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1c256327-2a39-42ed-ac9c-060c2c86d1f1
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 4.4 Supervised Fine-Tuning (SFT) Supervised Fine-Tuning (SFT) is a crucial step involving fine-tuning the model on annotated data to adapt it to specific tasks. To optimize BioMistral's performance beyond what is achievable with few-shot learning, we conducted SFT on both BioMistral 7B models and the baseline open-source models, using the training sets specified in Table 1. However, traditional SFT methods can be resource-intensive. To address this challenge, we adopted the QLoRA fine-tuning method (Dettmers et al., 2023) and an 8-bit quantization technique (Dettmers et al., 2022) as more cost-effective alternatives. Additionally, we implemented the improved batching method discussed in Section 3.2 to reduce fine-tuning time. For detailed hyperparameters used during SFT, please refer to Appendix A.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3fdb6734-d18e-4180-9ef2-46796e398a10
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5 Results And Discussions In this section, we report, analyze, and discuss the performance of BioMistral 7B models across various dimensions. We begin by examining its performance in a few-shot learning scenario (Section 5.1), followed by an evaluation of the fine-tuning performance (Section 5.2) of BioMistral 7B compared to several baseline models. The effectiveness of BioMistral 7B model merging strategies is then reported (Section 5.3) before exploring its generalization capabilities across several languages (Section 5.4). Additionally, we analyze the performance of BioMistral quantized versions in a few-shot scenario (Section 5.5). Finally, we delve into its reliability by examining its calibration (Section 5.6) and truthfulness (Section 5.7).
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
324c4532-1e7f-4619-ab42-bd53e560616d
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5.1 Few-Shot Learning The few-shot learning evaluation involved applying 3-shot in-context learning based on 3 different sets of randomly selected samples from each dataset's training set. We limited our samples to 3 due to the model's 2,048-token context window size. None of the models were fine-tuned on the datasets. In Table 2, we observe that BioMistral 7B outperforms Mistral 7B Instruct on 8 of the 10 tasks, demonstrating the effectiveness of domain adaptation (Chen et al., 2023; Lee et al., 2019). Additionally, BioMistral 7B surpasses all other opensource biomedical baselines on all tasks in this 3-shot scenario. The observed performances may vary depending on the dataset. For example, on MedQA 4 and 5 options, BioMistral 7B shows a MMLU Clinical KG Medical Genetics Anatomy Pro Medicine College Biology College Medicine MedQA MedQA 5 opts PubMedQA MedMCQA Avg. BioMistral 7B 59.9 ±1.2 64.0 ±1.6 56.5 ±1.8 60.4 ±0.5 59.0 ±1.5 54.7 ±1.0 50.6 ±0.3 42.8 ±0.3 77.5 ±0.1 48.1 ±0.2 57.3 Mistral 7B Instruct 62.9 ±0.2 57.0 ±0.8 55.6 ±1.0 59.4 ±0.6 62.5 ±1.0 57.2 ±2.1 42.0 ±0.2 40.9 ±0.4 75.7 ±0.4 46.1 ±0.1 55.9 BioMistral 7B Ensemble 62.8 ±0.5 62.7 ±0.5 57.5 ±0.3 63.5 ±0.8 64.3 ±1.6 55.7 ±1.5 50.6 ±0.3 43.6 ±0.5 77.5 ±0.2 48.8 ±0.0 58.7 BioMistral 7B DARE 62.3 ±1.3 67.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b364b698-beb7-43bd-920b-5275aa8b9e79
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5.1 Few-Shot Learning 7 ±0.4 46.1 ±0.1 55.9 BioMistral 7B Ensemble 62.8 ±0.5 62.7 ±0.5 57.5 ±0.3 63.5 ±0.8 64.3 ±1.6 55.7 ±1.5 50.6 ±0.3 43.6 ±0.5 77.5 ±0.2 48.8 ±0.0 58.7 BioMistral 7B DARE 62.3 ±1.3 67.0 ±1.6 55.8 ±0.9 61.4 ±0.3 66.9 ±2.3 58.0 ±0.5 51.1 ±0.3 45.2 ±0.3 77.7 ±0.1 48.7 ±0.1 59.4 BioMistral 7B TIES 60.1 ±0.9 65.0 ±2.4 58.5 ±1.0 60.5 ±1.1 60.4 ±1.5 56.5 ±1.9 49.5 ±0.1 43.2 ±0.1 77.5 ±0.2 48.1 ±0.1 57.9 BioMistral 7B SLERP 62.5 ±0.6 64.7 ±1.7 55.8 ±0.3 62.7 ±0.3 64.8 ±0.9 56.3 ±1.0 50.8 ±0.6 44.3 ±0.4 77.8 ±0.0 48.6 ±0.1 58.8 MedAlpaca 7B 53.1 ±0.9 58.0 ±2.2 54.1 ±1.6 58.8 ±0.3 58.1 ±1.3 48.6 ±0.5 40.1 ±0.4 33.7 ±0.7 73.6 ±0.3 37.0 ±0.3 51.5 PMC-LLaMA 7B 24.5 ±1.7 27.7 ±1.7 35.3 ±0.7 17
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3ed31c64-fc9b-41e2-8c0b-27d483654c42
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5.1 Few-Shot Learning .8 MedAlpaca 7B 53.1 ±0.9 58.0 ±2.2 54.1 ±1.6 58.8 ±0.3 58.1 ±1.3 48.6 ±0.5 40.1 ±0.4 33.7 ±0.7 73.6 ±0.3 37.0 ±0.3 51.5 PMC-LLaMA 7B 24.5 ±1.7 27.7 ±1.7 35.3 ±0.7 17.4 ±1.7 30.3 ±0.9 23.3 ±1.7 25.5 ±0.9 20.2 ±0.1 72.9 ±1.2 26.6 ±0.1 30.4 MediTron-7B 41.6 ±1.2 50.3 ±2.1 46.4 ±0.9 27.9 ±0.3 44.4 ±2.6 30.8 ±0.7 41.6 ±0.5 28.1 ±0.5 74.9 ±0.1 41.3 ±0.2 42.7 BioMedGPT-LM-7B 51.4 ±0.4 52.0 ±1.4 49.4 ±2.7 53.3 ±0.6 50.7 ±0.0 49.1 ±0.8 42.5 ±0.3 33.9 ±0.5 76.8 ±0.3 37.6 ±0.4 49.7 GPT-3.5 Turbo 1106* 74.71 ±0.3 74.00 ±2.2 65.92 ±0.6 72.79 ±1.6 72.91 ±1.7 64.73 ±2.9 57.71 ±0.3 50.82 ±0.7 72.66 ±1.0 53.79 ±0.2 66.0 9.6% and 11.1% increase over MediTron-7B and a 9.0% and 7.0% increase over MedAlpaca 7B, respectively. On M
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a6b58e65-9466-4e2c-8de7-53bb5f086992
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5.1 Few-Shot Learning 74.71 ±0.3 74.00 ±2.2 65.92 ±0.6 72.79 ±1.6 72.91 ±1.7 64.73 ±2.9 57.71 ±0.3 50.82 ±0.7 72.66 ±1.0 53.79 ±0.2 66.0 9.6% and 11.1% increase over MediTron-7B and a 9.0% and 7.0% increase over MedAlpaca 7B, respectively. On MMLU, BioMistral 7B improves performance over previous biomedical LLMs at the 7B scale, with an overall average gain of 6.45% over MedAlpaca 7B, 18.05% over MediTron-7B, and 31.12% over PMC-LLaMA 7B. Similarly, on MedMCQA, BioMistral 7B shows a 10.3% increase over MediTron-7B, 12.7% over MedAlpaca 7B, and 20.4% over PMC-LLaMA 7B. However, in the PubMedQA evaluation, BioMistral's performance experienced a decline, showing at least a 15.7% lower accuracy compared to other models, likely due to hallucinations caused by imbalanced classes. Overall, GPT-3.5 Turbo remains the best model in this 3-shot scenario.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2d1c442a-967d-4151-9e64-58e88edb1367
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5.2 Supervised Fine-Tuning (SFT) We present the performance of BioMistral models and related baselines in Table 3, measured in terms of accuracy. Overall, SFT leads to further improvements in the models' performance across almost all datasets. Comparing the models, we observe a similar trend to the few-shot in-context learning evaluation. BioMistral 7B outperforms Mistral 7B Instruct on 7 out of the 10 tasks and also surpasses all other open-source biomedical baselines in every task. We can also see a significant improvement in PubMedQA for BioMistral 7B, which now surpasses its predecessor.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
aa15a8a8-851e-4f15-aca5-b136289584eb
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5.3 Model Merging As detailed in Section 3.3, we evaluated 3 model merging methods (SLERP, TIES, and DARE) to assess their benefits. All models resulted from merging Mistral 7B Instruct and BioMistral 7B with equally weighted parameters (50% each). Two scenarios are studied: (1) few-shot learning (Table 2), and (2) supervised fine-tuning (Table 3). In the few-shot learning scenario, we also included an ensemble approach, referred to as BioMistral 7B Ensemble, which aggregates log probabilities of the target tokens and serves as a baseline. Across both scenarios, we observed consistent improvements over all open-source models using model merging strategies for all considered MCQA tasks. However, no merging strategy outperformed the others universally, with each demonstrating the highest performance on specific tasks. In the few-shot learning scenario (Table 2), BioMistral 7B Ensemble exhibited a notable increase in accuracy, by 3.7% on College Biology and 30.4% on PubMedQA compared to the standalone BioMistral 7B model. However, this strategy resulted in a slight performance reduction on Anatomy, with a 2.7% drop compared to BioMistral 7B. Across all merging methods, we observed enhanced performance against BioMistral 7B and BioMistral 7B Ensemble on almost all tasks. Among the merging methods, SLERP emerged as the most effective, showcasing an overall average accuracy gain of 5.11% over BioMistral 7B. In contrast, DARE and TIES methods yielded average gains of 4.35% and 0.82%, respectively. In the context of SFT (Table 3), similar observations were made: model merging methods further enhanced BioMistral's performance, widening the gap with other open-source biomedical baselines. On average, we observed a gain of 2.06% between the best merged model and BioMistral 7B, and 3.48% compared to Mistral 7B Instruct. Baseline models lagged behind, with a 7.9% overall loss for the best model, MedAlpaca 7B. Combining model merging methods with SFT enabled us to approach the performance levels of GPT-3.5 Turbo and sometimes even surpass them on certain datasets like MMLU Clinical KG Medical Genetics Anatomy Pro Medicine College Biology College
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
66b08696-1856-4f59-9755-b45a85f38cba
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5.3 Model Merging -source biomedical baselines. On average, we observed a gain of 2.06% between the best merged model and BioMistral 7B, and 3.48% compared to Mistral 7B Instruct. Baseline models lagged behind, with a 7.9% overall loss for the best model, MedAlpaca 7B. Combining model merging methods with SFT enabled us to approach the performance levels of GPT-3.5 Turbo and sometimes even surpass them on certain datasets like MMLU Clinical KG Medical Genetics Anatomy Pro Medicine College Biology College Medicine MedQA MedQA 5 opts PubMedQA MedMCQA Avg. BioMistral 7B* 60.9 ±1.5 61.7 ±2.1 49.6 ±1.2 55.1 ±1.3 56.9 ±1.0 55.5 ±1.7 44.4 ±0.2 37.4 ±0.4 37.6 ±1.5 43.9 ±0.3 50.3 AWQ 4bit + GEMV 59.5 ±1.2 61.3 ±1.7 50.6 ±2.5 53.9 ±0.7 56.2 ±1.5 52.6 ±1.7 43.2 ±0.8 36.8 ±0.5 61.7 ±0.9 41.8 ±0.2 51.8 +1.5 AWQ 4bit + GEMM 59.5 ±1.2 61.3 ±1.2 50.6 ±2.5 53.6 ±0.8 56.2 ±1.5 52.4 ±1.5 43.2 ±0.8 37.0 ±0.5 61.4 ±0.9 41.8 ±0.2 51.7 +1.4 BnB 4bit 57.6 ±1.1 58.7 ±0.9 47.2 ±0.9 52.9 ±1.3 53.7 ±0.9 54.3 ±1.2 43.1 ±0.2 36.8 ±0.9 22.4
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ff135f25-1205-4d21-8a75-f351f5808a68
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5.3 Model Merging .2 ±1.5 52.4 ±1.5 43.2 ±0.8 37.0 ±0.5 61.4 ±0.9 41.8 ±0.2 51.7 +1.4 BnB 4bit 57.6 ±1.1 58.7 ±0.9 47.2 ±0.9 52.9 ±1.3 53.7 ±0.9 54.3 ±1.2 43.1 ±0.2 36.8 ±0.9 22.4 ±0.4 42.0 ±0.1 46.9 -3.4 BnB 8bit 61.3 ±0.9 59.0 ±1.4 50.1 ±1.9 54.3 ±0.5 56.9 ±1.1 56.1 ±0.5 43.5 ±0.1 37.4 ±0.5 37.9 ±1.3 43.2 ±0.3 50.0 -0.3 PubMedQA, where we observed a 5.14% gain with BioMistral 7B SLERP.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
834683c0-28cb-469b-9349-97efa6d9b84a
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5.4 Multilingual Generalization We report in Appendix H the detailed few-shot learning performance of all models across the 7 targeted languages. Results are expressed in terms of accuracy averaged across 3 random seeds. Overall, we observe a performance decrease across models and tasks compared to the English benchmark, likely attributable to the quality of automatic translation. Despite this, GPT-3.5 Turbo achieves competitive performance, albeit slightly lower than that in English. We observe that the performance difference between GPT-3.5 Turbo and open-source medical models is similar across languages, which suggests a lack of training data in the targeted languages for the open-source models and stronger multilingual capabilities in GPT-3.5 Turbo. For a given model and task, the performance may vary between languages. For example, on MedQA with BioMistral 7B, the lowest performance is in Arabic (26.3%), while the best is in Spanish (33.7%), representing a delta of 7.4%. Similarly, this trend is observed for GPT-3.5 Turbo, with 40.0% accuracy in Chinese and 49.0% in Spanish. Notably, BioMistral 7B and Mistral 7B Instruct consistently yielded similar performances across all tasks and languages. Furthermore, the DARE, TIES, and SLERP merging variants consistently outperformed the original model and existing open-source medical counterparts across all tasks and languages, indicating better robustness in multilingual settings. Overall, despite the dominance of the BioMistral 7B models, additional pre-training has limited effect on multilingual medical performance, which remains below English, likely due to limited language diversity in the training data, raising interest in language-specific models.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1bebc3c2-f92f-4f3a-8bf9-1b492a3bcd9f
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5.5 Quantization Techniques Table 4 provides an overview of the impact of different quantization techniques on BioMistral performance. Notably, BnB 8-bit quantization demonstrates improvements in accuracy for datasets such as MMLU Clinical Knowledge and Anatomy, showing increases of 0.65% and 1.00%, respectively. However, there is a slight decrease in performance observed for tasks like MedQA with 4 and 5 options, resulting in decreases of 2.61% and 1.06% across all models. On the other hand, MedMCQA experiences a notable average performance drop of 4.05% across all quantization methods, while PubMedQA shows a remarkable 24.1% increase in accuracy when employing the AWQ method. Nonetheless, it is essential to consider the trade-off between the efficiency and accuracy of each method. Despite its high compression rate (see Appendix D) and competitive performance, the AWQ + GEMV model exhibits the slowest inference time, taking 421 seconds to process the MMLU Professional Medicine test set on an RTX 3090. In contrast, the AWQ + GEMM model achieves an 86.23% faster inference time, completing the same task in 57.96 seconds, albeit with a slight performance loss. Additionally, the 4-bit and 8-bit BnB methods exhibit slower inference times, taking 133 and 177 seconds, respectively, while using less memory and trading off some performance, making the AWQ + GEMM method the most attractive one.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8b51eb2b-e637-4952-962a-5c48180b3bbe
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5.6 Calibration Ensuring model calibration is essential to guarantee that predicted probabilities align with real-world outcomes. A well-calibrated model accurately reflects the confidence levels associated with its predictions. To evaluate calibration, we employ the Expected Calibration Error (ECE) metric, which quantifies the disparity between predicted probabilities and actual outcomes across confidence levels. A lower ECE value indicates better calibration, signifying that the model's confidence estimates are more reliable. $$E C E=\sum_{m=1}^{M}{\frac{|B_{m}|}{n}}\left|\mathrm{acc}(B_{m})-\mathrm{conf}(B_{m})\right|$$ Expected Calibration Error (↓) Arabic Chinese French German Portuguese Russian Spanish BioMistral 7B 13.9 2.7% 19.7 -1.6% 13.5 3.3% 15.2 2.8% 15.2 1.4% 15.2 2.4% 14.0 2.7% Mistral 7B Instruct 16.6 18.1 16.8 18.0 16.6 17.6 16.7 BioMistral 7B DARE 16.9 -0.3% 18.4 -0.3% 16.3 0.5% 16.6 1.4% 17.2 -0.6% 17.5 0.1% 16.5 0.2% BioMistral 7B TIES 15.7 0.9% 21.8 -3.7% 16.4 0.4% 16.9 1.1% 17.8 -1.2% 16.6 1.0% 16.7 -0.0% BioMistral 7B SLERP 14.8 1.8% 16.8 1.3% 14.5 2.3% 15.8 2.2% 15.3 1.3% 16.1 1.5% 15.4 1.3% MedAlpaca 7B 7.8 8.8% 5.4 12.7% 5.2 11.6%
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d842949c-f650-46ac-b22a-0ed6b493069d
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5.6 Calibration 9 1.1% 17.8 -1.2% 16.6 1.0% 16.7 -0.0% BioMistral 7B SLERP 14.8 1.8% 16.8 1.3% 14.5 2.3% 15.8 2.2% 15.3 1.3% 16.1 1.5% 15.4 1.3% MedAlpaca 7B 7.8 8.8% 5.4 12.7% 5.2 11.6% 4.8 13.2% 4.3 12.3% 5.5 12.1% 4.7 12.0% PMC-LLaMA 7B 15.1 1.5% 13.9 4.2% 12.8 4.0% 12.3 5.7% 12.2 4.4% 14.8 2.8% 12.9 3.8% MediTron-7B 10.5 6.1% 10.0 8.1% 8.2 8.6% 9.7 8.3% 7.2 9.4% 9.1 8.5% 8.2 8.5% BioMedGPT-LM-7B 5.1 11.5% 4.3 13.8% 4.8 12.0% 4.8 13.2% 5.3 11.3% 4.6 13.0% 4.4 12.3% Table 5 presents the calibration and confidence scores for BioMistral 7B and its base model across various languages compared to other open-source medical models. Interestingly, we observe that BioMistral 7B and its base model exhibit worse calibration and confidence scores compared to other models, potentially due to differences in calibration baselines with LLaMa foundation models. Furthermore, additional pre-training on PubMed improves calibration in all languages, particularly in English and French (3.3% ECE gain), with some degradation observed in Chinese (loss of 1.6%). This suggests the need for specific calibration adjustments for different languages, highlighting the importance of language-specific considerations. It is noteworthy that language-specific variations in average confidence levels exist across different models. For instance, Chinese models demonstrate
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0205c86a-9cef-44f2-98ba-3f4b975d6d48
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5.6 Calibration medical models. Interestingly, we observe that BioMistral 7B and its base model exhibit worse calibration and confidence scores compared to other models, potentially due to differences in calibration baselines with LLaMA foundation models. Furthermore, additional pre-training on PubMed improves calibration in all languages, particularly in English and French (3.3% ECE gain), with some degradation observed in Chinese (loss of 1.6%). This suggests the need for specific calibration adjustments for different languages, highlighting the importance of language-specific considerations. It is noteworthy that language-specific variations in average confidence levels exist across different models. For instance, the Mistral 7B series shows lower confidence on Chinese than on other languages, while the LLaMA-based models lag on Arabic. Interestingly, our analysis reveals that model merging methods tend to decrease calibration, indicating potential trade-offs between model performance and calibration.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
81910ad4-17da-400e-b74b-b49790334768
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 5.7 Truthfulness Truthfulness in language models is essential for preventing the spread of misconceptions and false beliefs. We employ the TruthfulQA benchmark (Lin et al., 2022) to assess truthfulness, which evaluates LLMs' factual and sensible output across 817 questions and 38 categories, such as finance and politics. For evaluation on the medical domain, we focus on health- and medicine-related categories. The evaluation consists of two zero-shot prompts: a general assessment prompt and one derived from the MediTron-7B article (see Figure 4). Table 8 shows that BioMistral 7B outperforms other models across both prompts and demonstrates a 4.0% improvement over GPT-3.5 Turbo. However, it is important to note that no single model consistently outperforms others across all tasks, indicating specific strengths and weaknesses in each model. Notably, BioMistral 7B DARE underperforms compared to the original BioMistral 7B. Interestingly, informing models that they are being tested for truthfulness significantly enhances their performance. However, when presented with prompts mimicking real-world user interactions, performance tends to decline. This drop could stem from a lack of awareness of bias in the prompts or a decrease in task comprehension. Finally, zero-shot prompting poses challenges, particularly for the PMC-LLaMA 7B and MediTron-7B models, which struggled to provide correct answers in the Science and Psychology categories.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
012001fb-5cbb-4582-85d7-faa610a856c6
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## 6 Conclusion We introduced BioMistral 7B, a collection of medical LLMs resulting from further pre-training Mistral 7B Instruct on high-quality PubMed Central resources. BioMistral 7B incorporates quantized and merged model variants and demonstrates state-of-the-art performance on the multilingual medical evaluation benchmark compared to other open-source 7B models. Our future work aims to assess the generation quality of BioMistral 7B through human evaluation. Additionally, we plan to enhance its multilingual and chat capabilities using supervised fine-tuning and direct preference optimization techniques, building on top of the experiments conducted by Rafailov et al. (2023) and Li et al. (2023). Finally, we intend to improve the calibration and reliability of our model by integrating techniques such as Jeffrey's divergence (Jeffreys, 1946) or Platt scaling (Platt et al., 1999) during the further pre-training process.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3da12509-2b41-4a7a-a3bb-2df604b66e0c
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## Acknowledgments This work was performed using HPC resources from GENCI-IDRIS (Grants 2023-AD011013715R1 and 2023-AD011013061R2). This work was financially supported by ANR MALADES (ANR-23-IAS1-0005) and Zenidoc.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3e7d7dbc-ded2-43e7-86e6-8381cddf656a
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## Limitations This study required substantial computational resources, encompassing approximately 5,000 hours of A100 80GB GPU computation. These resources were utilized for model creation, evaluations, experimentation with different architectures, and debugging. Technical issues related to model configurations and performance also necessitated additional computation time. According to documentation from the Jean Zay supercomputer, the total environmental cost amounted to 1,295,000 Wh or 73.8 kg CO2eq, based on the carbon intensity of the energy grid as reported in the BLOOM environmental cost study conducted on Jean Zay (Luccioni et al., 2022). The valuation of the computing hours of the experiments amounts to approximately 3,600 EUR, based on Genci documentation, or 20,480 USD for AWS on-demand p4d.24xlarge instances. Additionally, the total inference cost for GPT-3.5 Turbo, inherent to the translation and few-shot evaluation process, amounted to 355.47 USD. These costs make reproducing this study challenging when limited financial and material resources are available. Given the evolving nature of the GPT-3.5 Turbo model, future replication of these experiments may become impractical if the version used is no longer maintained. While BioMistral 7B is proficient in processing medical terms and concepts close to its training dataset, the model may encounter difficulties with unfamiliar or rare medical procedures or terminology. Furthermore, its reliance on English-language data results in degraded performance in non-English contexts. This can occasionally cause misinterpretations and lead to erroneous predictions. Our benchmark offers a framework for academic assessment with selected tasks and metrics, but it might not accurately reflect end users' actual usage patterns or priorities. Designed for research environments, these criteria may overlook various factors that shape real-world user experiences and preferences.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9dc6dc6b-99b3-4253-808d-35513d98d2c1
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## Ethics Statement Users are solely responsible for the content they generate with BioMistral 7B, and there are no mechanisms in place for detecting or addressing harmful, biased, or toxic content. Any modifications of the models will be released under different version numbers to keep track of the original models related to this paper. While we introduce BioMistral 7B as a model tailored for the medical domain at large, its evaluation was limited to MCQA datasets, which may not reflect its effectiveness outside this scope. Similar to other LLMs, BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes. Further evaluation of available language models on various domains is required to assess their capability to generate toxic, rude, or hateful content. To achieve this, the use of datasets such as ToxiGen (Hartvigsen et al., 2022) could provide deeper insights into the subject and help understand how to prevent such behavior. Bias can also significantly impact how language models handle given tasks and may perpetuate stereotypical social biases and demographic attributes observed during training. Datasets that could be utilized to assess such biases include BOLD (Dhamala et al., 2021) for more general contexts, as well as Discrim-Eval (Tamkin et al., 2023) and SHADR (Guevara et al., 2024), which are specialized for the medical domain.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fb1a4c47-d659-4e06-a288-2a7523e0fa29
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## A Supervised Fine-Tuning Hyperparameters
| Parameter | Value |
|-----------------------------|---------|
| Rank | 16 |
| LoRA Alpha | 16 |
| LoRA Dropout | 0.05 |
| Learning rate | 2e-05 |
| Train batch size | 4 |
| Evaluation batch size | 8 |
| Seed | 42 |
| Number of GPUs | 8 |
| Gradient accumulation steps | 2 |
| Batch size | 64 |
| Optimizer β | 0.9 / |
| Optimizer ϵ | 1e-08 |
| Scheduler | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
80a590d5-ad6a-40d5-ae12-36e90fb0b349
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## A Supervised Fine-Tuning Hyperparameters
| Parameter | Value |
|-----------------------------|---------|
| Optimizer ϵ | 1e-08 |
| Scheduler | Cosine |
| Number of epochs | 3 |
| Target Modules | QKVOGUD |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5f7dcf3e-8994-488d-9e9c-709fd89444ab
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## B Tokenization We adapted the SentencePiece tokenizer (Kudo and Richardson, 2018) of Mistral, which had a vocabulary of 32,000 tokens, by adding a padding token. This padding token is identical to the end-of-sequence token (</s>).
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a0a7cb64-e227-47d5-ac3a-1af27c51eb3e
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## C Grouping Method Algorithm
Algorithm 1: Pseudocode of the grouping method.
Data: Input list of unequal-length token sequences
Result: A list of 2,048-token-long sequences
separator ← </s>;
tokens ← flatten(sequences, separator);
length ← size(tokens);
if length >= 2048 then
    length ← (length // 2048) × 2048;
    for i ← 0 to length step 2048 do
        result.append(tokens[i : i + 2048]);
    end
else
    result ← tokens;
end
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ecf7629c-85cf-468a-a66a-8a66f56d3933
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## D Memory Footprint
| Method | VRAM (GB) | Inference (s) |
|------------|-------------|-----------------|
| FP16/BF16 | 15.02 | 40.94 |
| BnB.8 | 8.04 | 177.75 |
| BnB.4 | 5.03 | 133.06 |
| AWQ + GEMV | 4.68 | 421.78 |
| AWQ + GEMM | 4.68 | 57.96 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
574279fd-fc6d-4f83-93d5-17ccba9116e2
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## E Training Loss As described in Section 3.1, one of our pre-training strategies was to achieve the 1.5-epoch milestone, similar to the Zephyr model. This milestone is considered optimal for maximizing model performance while minimizing training time. To accomplish this within the 20-hour limitation set by the Jean Zay computing resources, we estimated our capability to process 3 billion tokens per epoch. Figure 1 shows our training loss during the further pre-training of Mistral 7B Instruct v0.1 on PubMed Central. This data validates our estimations and demonstrates behavior similar to that of Zephyr, thereby supporting our hypothesis. ## F Instruction Template For Multiple Choice Question Answering Figure 2 displays the instruction template used for all datasets. In the case of PubMedQA, the prompt includes a context before the question, and the three answer options *yes*, *no*, and *maybe* are formulated as a multiple-choice question, where A is yes, B is no, and C is maybe, matching the testing method of Liévin et al. (2023).
Instruction Template
The following are multiple choice questions (with answers) about medical knowledge.
{% for shot in fewshots %}
{{context}}**Question:** {{question}}
{% for option in options %}
({{letter}}) {{text}}
{% endfor %}
**Answer:**({{correct_letter}}
{% endfor %}
{{context}}**Question:** {{question}}
{% for option in options %}
({{letter}}) {{text}}
{% endfor %}
**Answer:**({{correct_letter}}
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
296f992c-3361-43a4-8846-be22a3f796fd
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## G Truthfulqa Acurracy (↑) Model Health Nutrition Psychology Science Avg Prompt 1 - QA prompt BioMistral 7B 72.7 68.8 31.6 33.3 51.6 Mistral 7B Instruct 60.0 43.8 42.1 44.4 47.5 BioMistral 7B Ensemble 69.1 59.5 52.0 50.1 57.6 BioMistral 7B DARE 67.3 50.0 36.8 44.4 49.6 BioMistral 7B SLERP 63.6 68.8 36.8 44.4 53.4 BioMistral 7B TIES 69.1 68.8 36.8 33.3 52.0 MedAlpaca 7B 34.5 12.5 15.8 33.3 24.0 PMC-LLaMa 7B 9.1 25.0 10.5 0.0 11.1 MediTron-7B 16.4 18.8 5.3 0.0 10.1 BioMedGPT-LM-7B 40.0 18.8 26.3 44.4 32.37 GPT-3.5 Turbo 1106 65.5 62.5 42.1 44.4 53.6 Prompt 2 - Truthful answer prompt BioMistral 7B 78.2 75.0 36.8 55.6 61.4 Mistral 7B Instruct 61.8 56.2 31.6 44.4 48.5 BioMistral 7B Ensemble 74.5 71.6 60.0 56.1 65.6 BioMistral 7B DARE 70.9 75.0 36.8 33.3 54.0 BioMistral 7B SLERP 69.1 81.2 36.8 33.3 55.1 BioMistral 7B TIES 83.6 75.0 42.1 44.4 61.3
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
bfa4dfac-ad86-407b-b0e6-2eceb50ad8cd
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## G Truthfulqa 56.2 31.6 44.4 48.5 BioMistral 7B Ensemble 74.5 71.6 60.0 56.1 65.6 BioMistral 7B DARE 70.9 75.0 36.8 33.3 54.0 BioMistral 7B SLERP 69.1 81.2 36.8 33.3 55.1 BioMistral 7B TIES 83.6 75.0 42.1 44.4 61.3 MedAlpaca 7B 41.8 18.8 26.3 22.2 27.3 PMC-LLaMA 7B 10.9 25.0 10.5 0.0 11.6 MediTron-7B 14.5 25.0 0.0 0.0 9.8 BioMedGPT-LM-7B 36.4 25.0 15.8 33.3 27.62 GPT-3.5 Turbo 1106 80.0 68.8 42.1 44.4 58.8 TruthfulQA - Prompt 1 The following are multiple choice questions (with answers) about medical knowledge. **Question:** Can a human uterus grow back after a full hysterectomy? (A) Yes, a human uterus regenerates every month. (B) No, a human uterus cannot grow back after a full hysterectomy. (C) Yes, there is reliable evidence of a human uterus growing back after a full hysterectomy. (D) Yes, a human uterus can grow back after a full hysterectomy. **Answer:**( B BioMistral 7B: B BioMistral 7B TIES: B Mistral 7B Instruct: B MedAlpaca 7B: B PMC-LLaMA 7B: A MediTron-7B: C GPT-3.5 Turbo 1106: B TruthfulQA - Prompt 2 Please truthfully answer the following multiple-choice question. Please ensure that your choice is socially unbiased
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a3e9176a-4692-473e-8da9-d2d66c75cba7
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## G Truthfulqa omy. (D) Yes, a human uterus can grow back after a full hysterectomy. **Answer:**( B BioMistral 7B: B BioMistral 7B TIES: B Mistral 7B Instruct: B MedAlpaca 7B: B PMC-LLaMA 7B: A MediTron-7B: C GPT-3.5 Turbo 1106: B TruthfulQA - Prompt 2 Please truthfully answer the following multiple-choice question. Please ensure that your choice is socially unbiased and positive. If you don't know the answer to a question, please don't share false information. Can a human uterus grow back after a full hysterectomy? (A) Yes, a human uterus regenerates every month. (B) No, a human uterus cannot grow back after a full hysterectomy. (C) Yes, there is reliable evidence of a human uterus growing back after a full hysterectomy. (D) Yes, a human uterus can grow back after a full hysterectomy. The answer is: ( B BioMistral 7B: B BioMistral 7B TIES: B Mistral 7B Instruct: B MedAlpaca 7B: B PMC-LLaMA 7B: A MediTron-7B: C GPT-3.5 Turbo 1106: D
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f5d21328-1710-4536-8109-2542175c9fc5
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results MMLU Clinical KG Medical Genetics Anatomy Pro Medicine College Biology College Medicine MedQA MedQA 5 opts PubMedQA MedMCQA Avg. Arabic BioMistral 7B 33.8 ±2.8 27.0 ±2.2 28.6 ±0.9 29.9 ±0.8 24.8 ±0.9 27.0 ±2.3 26.3 ±0.3 20.4 ±0.1 54.5 ±0.4 27.1 ±0.3 29.9 Mistral 7B Instruct 32.6 ±0.8 31.3 ±1.7 27.2 ±0.7 24.8 ±1.2 26.2 ±3.6 27.0 ±1.2 26.5 ±1.4 21.9 ±0.6 53.6 ±0.5 30.1 ±0.4 30.1 BioMistral 7B DARE 33.7 ±1.0 29.3 ±2.6 27.9 ±1.9 24.1 ±0.5 25.2 ±1.2 22.9 ±0.7 27.1 ±0.2 21.7 ±0.5 54.3 ±1.6 29.4 ±0.2 29.6 BioMistral 7B TIES 33.1 ±0.7 28.0 ±2.9 29.9 ±1.3 28.8 ±1.4 24.1 ±1.8 27.7 ±1.2 26.6 ±0.2 22.1 ±0.5 55.0 ±0.3 27.5 ±0.3 30.3 BioMistral 7B SLERP 31.7 ±1.1 31.7 ±1.2 27.7 ±1.9 27.9 ±1.4 23.8 ±1.2 24.3 ±1.7 27.5 ±0.6 20.7 ±0.5 55.4 ±0.7 29.5 ±0.2 30.0 MedAlp
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a334577a-c722-4213-b273-6fe220ec65c6
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results 0.2 22.1 ±0.5 55.0 ±0.3 27.5 ±0.3 30.3 BioMistral 7B SLERP 31.7 ±1.1 31.7 ±1.2 27.7 ±1.9 27.9 ±1.4 23.8 ±1.2 24.3 ±1.7 27.5 ±0.6 20.7 ±0.5 55.4 ±0.7 29.5 ±0.2 30.0 MedAlpaca 7B 27.3 ±3.3 31.0 ±3.7 28.1 ±0.6 29.5 ±2.6 24.5 ±0.9 24.1 ±1.5 24.5 ±0.7 20.3 ±0.7 16.3 ±1.8 27.1 ±0.3 25.3 PMC-LLaMA 7B 24.3 ±1.7 29.3 ±0.9 27.9 ±3.0 19.6 ±0.5 27.3 ±1.4 23.3 ±0.5 25.7 ±0.4 20.9 ±0.8 15.5 ±1.2 25.4 ±0.4 23.9 MediTron-7B 24.8 ±0.2 27.3 ±1.2 29.1 ±1.8 15.8 ±2.7 26.2 ±1.8 21.6 ±1.0 27.5 ±0.9 21.4 ±1.1 51.9 ±0.8 28.4 ±0.4 27.4 BioMedGPT-LM-7B 25.4 ±2.1 25.7 ±2.5 26.9 ±2.1 24.4 ±2.4 26.6 ±0.3 27.4 ±0.3 26.0 ±0.4 23.3 ±1.4 54.9 ±0.6 27.5 ±0.4 28.8 GPT-3.5 Turbo 1106 54.3 ±0.4
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5f4230de-a23a-42ca-8745-dedb2cc42307
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results ±0.8 28.4 ±0.4 27.4 BioMedGPT-LM-7B 25.4 ±2.1 25.7 ±2.5 26.9 ±2.1 24.4 ±2.4 26.6 ±0.3 27.4 ±0.3 26.0 ±0.4 23.3 ±1.4 54.9 ±0.6 27.5 ±0.4 28.8 GPT-3.5 Turbo 1106 54.3 ±0.4 53.3 ±2.7 50.0 ±0.8 48.3 ±1.4 47.7 ±0.3 47.1 ±1.9 40.8 ±0.6 34.5 ±0.8 59.5 ±0.7 39.3 ±0.6 47.5 Chinese BioMistral 7B 38.9 ±5.5 32.2 ±5.5 30.6 ±2.2 31.9 ±2.1 30.1 ±5.4 29.3 ±3.2 27.8 ±1.6 22.8 ±2.4 57.5 ±3.0 29.7 ±2.6 33.1 Mistral 7B Instruct 37.0 ±4.7 34.3 ±3.3 30.7 ±3.9 27.7 ±3.1 30.8 ±5.4 29.9 ±3.1 28.5 ±2.3 23.4 ±1.6 58.1 ±4.6 31.5 ±1.5 33.2 BioMistral 7B DARE 38.6 ±5.0 35.3 ±6.3 29.8 ±2.5 26.8 ±2.8 32.3 ±7.2 28.2 ±5.4 29.3 ±2.2 24.3 ±2.7 59.2 ±5.1 31.6 ±2.2 33.6 BioMistral 7B TIES 38.6 ±5.6 32.7 ±5.1 30.7 ±1.3 30
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
84e91028-a02e-4e2b-88b0-195fb69b52ce
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results .2 BioMistral 7B DARE 38.6 ±5.0 35.3 ±6.3 29.8 ±2.5 26.8 ±2.8 32.3 ±7.2 28.2 ±5.4 29.3 ±2.2 24.3 ±2.7 59.2 ±5.1 31.6 ±2.2 33.6 BioMistral 7B TIES 38.6 ±5.6 32.7 ±5.1 30.7 ±1.3 30.1 ±1.7 30.3 ±6.5 28.8 ±1.5 28.4 ±1.8 24.0 ±2.0 59.4 ±4.5 30.1 ±2.6 33.3 BioMistral 7B SLERP 37.5 ±5.8 35.5 ±4.3 31.9 ±4.5 30.0 ±2.3 31.1 ±7.6 30.0 ±5.9 29.2 ±1.9 24.1 ±3.4 60.0 ±4.7 31.5 ±2.0 34.1 MedAlpaca 7B 29.2 ±3.4 30.2 ±4.0 29.8 ±1.8 33.7 ±4.6 25.1 ±1.2 24.5 ±2.3 25.0 ±0.8 21.4 ±1.2 31.4 ±15.2 27.2 ±0.3 27.7 PMC-LLaMA 7B 24.2 ±1.3 27.3 ±3.9 30.2 ±3.9 18.6 ±1.1 26.0 ±2.7 24.0 ±1.1 26.3 ±0.9 20.6 ±0.7 32.3 ±16.8 24.8 ±0.7 25.4 MediTron-7B 25.8 ±1.2 30.2 ±3.2 29.0 ±1.4 17.8 ±3.0 26.7 ±1.9
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
307c810f-76e9-4005-bc62-3d6fba7ee8e5
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results 1.3 27.3 ±3.9 30.2 ±3.9 18.6 ±1.1 26.0 ±2.7 24.0 ±1.1 26.3 ±0.9 20.6 ±0.7 32.3 ±16.8 24.8 ±0.7 25.4 MediTron-7B 25.8 ±1.2 30.2 ±3.2 29.0 ±1.4 17.8 ±3.0 26.7 ±1.9 24.1 ±2.6 27.4 ±0.9 21.3 ±1.0 52.1 ±1.0 29.0 ±0.7 28.3 BioMedGPT-LM-7B 30.3 ±5.2 28.0 ±2.9 29.4 ±3.1 24.1 ±1.9 29.3 ±2.7 28.8 ±1.7 27.0 ±1.0 22.9 ±1.3 56.5 ±1.6 27.7 ±0.4 30.4 GPT-3.5 Turbo 1106 55.2 ±3.6 44.0 ±2.2 47.2 ±0.3 47.2 ±0.8 48.4 ±2.0 43.4 ±2.9 40.0 ±1.3 32.2 ±1.0 58.9 ±0.1 35.5 ±0.3 45.2 French BioMistral 7B 42.5 ±6.9 38.2 ±9.7 35.6 ±7.3 36.2 ±6.2 33.1 ±6.1 35.5 ±9.2 30.7 ±4.4 25.2 ±3.9 61.5 ±6.1 32.5 ±4.5 37.1 Mistral 7B Instruct 39.7 ±5.4 38.1 ±6.1 35.6 ±7.7 32.5 ±7.2 32.7 ±5.2 33.8 ±6.3 30.4 ±3.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
97d2825d-a559-4f19-be7d-96f36bebe5bb
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results 35.6 ±7.3 36.2 ±6.2 33.1 ±6.1 35.5 ±9.2 30.7 ±4.4 25.2 ±3.9 61.5 ±6.1 32.5 ±4.5 37.1 Mistral 7B Instruct 39.7 ±5.4 38.1 ±6.1 35.6 ±7.7 32.5 ±7.2 32.7 ±5.2 33.8 ±6.3 30.4 ±3.3 25.2 ±2.9 62.0 ±6.7 33.5 ±3.1 36.3 BioMistral 7B DARE 42.9 ±7.3 39.8 ±8.1 34.6 ±7.1 31.8 ±7.4 35.3 ±7.2 33.9 ±9.2 31.8 ±4.0 26.5 ±3.8 63.8 ±7.6 34.3 ±4.1 37.5 BioMistral 7B TIES 42.9 ±7.6 37.9 ±8.6 35.3 ±6.6 33.9 ±5.5 32.9 ±6.5 35.2 ±9.1 31.2 ±4.3 26.2 ±3.5 63.0 ±6.3 33.0 ±4.7 37.2 BioMistral 7B SLERP 42.6 ±8.7 40.2 ±7.6 37.0 ±8.1 35.3 ±7.7 34.6 ±7.9 34.7 ±8.3 32.1 ±4.3 26.6 ±4.5 64.2 ±7.0 34.4 ±4.4 38.2 MedAlpaca 7B 31.8 ±4.7 31.2 ±3.9 33.4 ±5.5 37.7 ±6.8 28.3 ±4.6 25.5 ±2.5 27.0 ±3.1 22.9 ±2.3 39.1 ±16.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
411aec5a-eefb-431d-97ab-45874145feab
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results .6 ±7.9 34.7 ±8.3 32.1 ±4.3 26.6 ±4.5 64.2 ±7.0 34.4 ±4.4 38.2 MedAlpaca 7B 31.8 ±4.7 31.2 ±3.9 33.4 ±5.5 37.7 ±6.8 28.3 ±4.6 25.5 ±2.5 27.0 ±3.1 22.9 ±2.3 39.1 ±16.5 28.1 ±1.3 30.5 PMC-LLaMA 7B 23.4 ±1.9 25.8 ±4.0 30.9 ±3.5 18.0 ±1.4 26.7 ±2.6 24.2 ±1.0 26.6 ±0.9 20.8 ±0.6 38.8 ±16.5 24.3 ±0.9 26.0 MediTron-7B 26.8 ±1.9 31.1 ±3.3 31.0 ±3.3 19.4 ±3.4 27.4 ±1.9 23.6 ±2.4 28.6 ±1.9 21.6 ±1.0 52.4 ±1.0 29.6 ±1.0 29.1 BioMedGPT-LM-7B 32.8 ±5.6 31.7 ±5.9 32.2 ±4.7 26.5 ±3.8 32.5 ±5.4 31.1 ±3.6 28.8 ±2.7 24.2 ±2.2 57.1 ±1.6 28.5 ±1.2 32.5 GPT-3.5 Turbo 1106 63.4 ±0.3 65.3 ±2.9 58.8 ±0.7 63.4 ±2.4 59.0 ±1.0 54.5 ±3.3 49.0 ±0.2 42.3 ±0.5 63.3 ±0.7 46.2 ±0.8 56
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5a098698-2442-43eb-b243-118c1326ac1f
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results 28.8 ±2.7 24.2 ±2.2 57.1 ±1.6 28.5 ±1.2 32.5 GPT-3.5 Turbo 1106 63.4 ±0.3 65.3 ±2.9 58.8 ±0.7 63.4 ±2.4 59.0 ±1.0 54.5 ±3.3 49.0 ±0.2 42.3 ±0.5 63.3 ±0.7 46.2 ±0.8 56.5 German BioMistral 7B 45.1 ±7.6 39.5 ±8.8 36.8 ±6.9 38.5 ±6.7 35.3 ±6.5 37.3 ±8.6 32.4 ±4.8 26.5 ±4.1 61.6 ±5.3 33.6 ±4.3 38.7 Mistral 7B Instruct 41.5 ±5.7 39.7 ±6.0 37.2 ±7.2 34.3 ±7.0 34.4 ±5.4 34.4 ±5.6 31.6 ±3.5 26.0 ±2.9 63.2 ±6.2 34.3 ±3.0 37.6 BioMistral 7B DARE 45.1 ±7.4 42.5 ±8.6 37.4 ±7.9 34.6 ±8.1 37.1 ±7.0 35.2 ±8.2 33.7 ±4.7 28.0 ±4.2 64.4 ±6.7 35.3 ±4.0 39.3 BioMistral 7B TIES 45.5 ±8.2 39.6 ±8.1 36.8 ±6.3 36.4 ±6.5 35.1 ±6.9 36.6 ±8.3 32.8 ±4.6 27.3 ±3.6 62.3 ±5.6 34.1 ±4.5 38.7 BioMistral 7B SLERP 45.8 ±9.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
42c98245-e359-4a7e-8ca9-e60a54178d58
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results 64.4 ±6.7 35.3 ±4.0 39.3 BioMistral 7B TIES 45.5 ±8.2 39.6 ±8.1 36.8 ±6.3 36.4 ±6.5 35.1 ±6.9 36.6 ±8.3 32.8 ±4.6 27.3 ±3.6 62.3 ±5.6 34.1 ±4.5 38.7 BioMistral 7B SLERP 45.8 ±9.4 42.4 ±7.6 39.1 ±8.0 37.5 ±7.7 36.6 ±7.7 36.3 ±7.7 33.7 ±4.7 27.8 ±4.5 65.1 ±6.3 35.4 ±4.2 40.0 MedAlpaca 7B 33.2 ±4.8 32.4 ±4.6 34.4 ±5.1 39.6 ±6.8 31.0 ±6.4 27.8 ±4.6 27.6 ±2.9 23.4 ±2.3 42.5 ±15.5 28.4 ±1.2 32.0 PMC-LLaMA 7B 23.7 ±1.9 25.3 ±3.7 30.7 ±3.9 17.8 ±1.5 27.7 ±2.9 24.8 ±1.4 26.9 ±1.0 20.8 ±0.7 42.2 ±15.5 24.2 ±0.8 26.4 MediTron-7B 27.5 ±2.2 31.3 ±3.0 31.7 ±3.3 19.7 ±3.0 27.1 ±1.9 23.2 ±2.3 28.8 ±1.7 21.8 ±1.0 52.5 ±0.9 29.8 ±1.0 29.3 BioMedGPT-LM-7B 35.1 ±6.3 33.0 ±5.6 34.1 ±
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4a9cf56f-808f-492c-aed8-c29e8c5b75ac
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results 8 26.4 MediTron-7B 27.5 ±2.2 31.3 ±3.0 31.7 ±3.3 19.7 ±3.0 27.1 ±1.9 23.2 ±2.3 28.8 ±1.7 21.8 ±1.0 52.5 ±0.9 29.8 ±1.0 29.3 BioMedGPT-LM-7B 35.1 ±6.3 33.0 ±5.6 34.1 ±5.4 28.8 ±5.2 33.3 ±5.0 31.8 ±3.4 29.4 ±2.6 24.7 ±2.1 57.4 ±1.5 28.8 ±1.1 33.6 GPT-3.5 Turbo 1106 59.9 ±1.6 54.7 ±2.4 50.9 ±0.3 56.3 ±0.8 54.6 ±1.0 47.5 ±2.1 45.2 ±0.7 38.2 ±0.6 60.4 ±0.3 40.8 ±0.2 50.8 Portuguese BioMistral 7B 44.9 ±6.8 41.3 ±8.7 37.2 ±6.2 40.1 ±6.9 35.7 ±5.9 38.2 ±7.9 33.3 ±4.6 27.2 ±3.9 62.3 ±4.9 34.2 ±4.1 39.4 Mistral 7B Instruct 42.2 ±5.3 40.9 ±5.9 37.7 ±6.7 35.4 ±6.7 34.4 ±4.9 35.6 ±5.7 31.9 ±3.2 26.5 ±2.8 64.1 ±5.9 34.7 ±2.8 38.3 BioMistral 7B DARE 45.2 ±6.6 43.1 ±7.9 38.0 ±7.2 36.4 ±8.0 37.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
dda42662-7710-4441-8eb2-66de9806a7fa
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results Instruct 42.2 ±5.3 40.9 ±5.9 37.7 ±6.7 35.4 ±6.7 34.4 ±4.9 35.6 ±5.7 31.9 ±3.2 26.5 ±2.8 64.1 ±5.9 34.7 ±2.8 38.3 BioMistral 7B DARE 45.2 ±6.6 43.1 ±7.9 38.0 ±7.2 36.4 ±8.0 37.7 ±6.4 36.9 ±8.1 34.3 ±4.4 28.6 ±4.0 65.6 ±6.5 35.7 ±3.7 40.1 BioMistral 7B TIES 45.2 ±7.4 41.3 ±8.0 37.5 ±5.9 38.2 ±6.8 35.2 ±6.2 37.3 ±7.6 33.8 ±4.6 27.9 ±3.5 63.3 ±5.4 34.6 ±4.1 39.4 BioMistral 7B SLERP 46.6 ±8.6 43.1 ±7.0 39.4 ±7.2 39.5 ±8.0 37.5 ±7.2 38.1 ±7.8 34.4 ±4.4 28.4 ±4.2 66.1 ±5.9 36.0 ±4.0 40.9 MedAlpaca 7B 33.8 ±4.5 32.7 ±4.3 35.1 ±4.8 40.6 ±6.4 30.9 ±5.7 29.1 ±5.0 28.0 ±2.7 24.0 ±2.5 45.0 ±14.7 28.6 ±1.1 32.8 PMC-LLaMA 7B 23.9 ±1.7 25.2 ±3.4 30.3 ±3.7 17.7 ±1.8 28.0 ±2.7 24.7 ±1.5 26
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a5e9cf8a-0cd7-467a-b6aa-d0f8dfbcb0c1
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results 3 35.1 ±4.8 40.6 ±6.4 30.9 ±5.7 29.1 ±5.0 28.0 ±2.7 24.0 ±2.5 45.0 ±14.7 28.6 ±1.1 32.8 PMC-LLaMA 7B 23.9 ±1.7 25.2 ±3.4 30.3 ±3.7 17.7 ±1.8 28.0 ±2.7 24.7 ±1.5 26.9 ±0.9 20.9 ±0.8 44.2 ±14.4 24.1 ±0.8 26.6 MediTron-7B 27.8 ±2.1 31.7 ±2.9 31.4 ±3.1 20.4 ±3.1 27.7 ±2.2 23.0 ±2.1 29.0 ±1.6 21.8 ±1.0 52.7 ±0.9 30.0 ±1.0 29.6 BioMedGPT-LM-7B 35.1 ±5.6 33.3 ±5.1 34.8 ±5.0 30.0 ±5.2 33.6 ±4.6 32.2 ±3.3 29.8 ±2.5 24.8 ±1.9 58.0 ±1.8 28.7 ±1.0 34.0 GPT-3.5 Turbo 1106 60.8 ±1.5 60.8 ±1.5 53.8 ±2.4 58.1 ±1.4 56.2 ±0.8 57.3 ±1.8 45.6 ±0.4 39.1 ±0.9 61.5 ±0.5 43.6 ±0.3 53.7 Russian BioMistral 7B 45.5 ±6.4 42.4 ±8.3 37.8 ±5.9 39.1 ±6.7 37.2 ±6.4 39.0 ±7.4 33.1 ±4.3 27.0 ±3
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
60cdee00-fd31-4abe-bd0d-1b3ed839ed0f
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results .1 ±1.4 56.2 ±0.8 57.3 ±1.8 45.6 ±0.4 39.1 ±0.9 61.5 ±0.5 43.6 ±0.3 53.7 Russian BioMistral 7B 45.5 ±6.4 42.4 ±8.3 37.8 ±5.9 39.1 ±6.7 37.2 ±6.4 39.0 ±7.4 33.1 ±4.3 27.0 ±3.6 62.9 ±4.7 34.2 ±3.7 39.8 Mistral 7B Instruct 43.0 ±5.1 40.9 ±5.5 38.3 ±6.2 34.8 ±6.3 34.9 ±4.6 36.1 ±5.3 32.0 ±2.9 26.4 ±2.5 63.9 ±5.4 34.6 ±2.6 38.5 BioMistral 7B DARE 45.7 ±6.1 43.7 ±7.3 38.4 ±6.7 35.7 ±7.5 39.2 ±6.8 37.7 ±7.6 34.1 ±4.1 28.4 ±3.6 65.8 ±6.0 35.8 ±3.4 40.5 BioMistral 7B TIES 46.0 ±7.0 42.3 ±7.7 38.2 ±5.7 37.2 ±6.6 36.8 ±6.7 38.4 ±7.4 33.5 ±4.2 27.7 ±3.2 64.0 ±5.2 34.6 ±3.8 39.9 BioMistral 7B SLERP 47.0 ±7.9 44.3 ±6.9 39.5 ±6.6 38.6 ±7.6 38.6 ±7.0 38.9 ±7.4 34.3 ±4.1 28.2 ±3.9 66.0 ±5.4 35.9 ±3.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
54749700-fdd7-4fc2-9a61-0f9537c93b1a
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results ±7.4 33.5 ±4.2 27.7 ±3.2 64.0 ±5.2 34.6 ±3.8 39.9 BioMistral 7B SLERP 47.0 ±7.9 44.3 ±6.9 39.5 ±6.6 38.6 ±7.6 38.6 ±7.0 38.9 ±7.4 34.3 ±4.1 28.2 ±3.9 66.0 ±5.4 35.9 ±3.6 41.1 MedAlpaca 7B 34.3 ±4.3 32.2 ±4.2 35.0 ±4.4 40.7 ±5.9 30.4 ±5.4 29.2 ±4.6 27.7 ±2.5 23.8 ±2.3 46.1 ±13.7 28.4 ±1.2 32.8 PMC-LLaMA 7B 23.9 ±1.6 24.8 ±3.3 30.7 ±3.5 17.7 ±1.8 27.8 ±2.6 24.9 ±1.4 27.0 ±0.9 20.9 ±0.8 45.2 ±13.3 23.9 ±0.8 26.7 MediTron-7B 28.0 ±2.0 31.9 ±3.0 31.6 ±3.1 20.1 ±2.9 27.3 ±2.3 23.1 ±2.0 29.1 ±1.6 21.5 ±1.1 52.8 ±0.9 29.7 ±1.1 29.5 BioMedGPT-LM-7B 35.3 ±5.2 34.5 ±5.7 34.7 ±4.7 30.4 ±4.9 34.1 ±4.5 32.4 ±3.0 29.7 ±2.3 24.7 ±1.8 57.7 ±1.8 28.6 ±1.0 34.2 GPT-3.5 Turbo 110
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fd87ced2-1824-4380-851d-6fc868a8b54c
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results 5 ±1.1 52.8 ±0.9 29.7 ±1.1 29.5 BioMedGPT-LM-7B 35.3 ±5.2 34.5 ±5.7 34.7 ±4.7 30.4 ±4.9 34.1 ±4.5 32.4 ±3.0 29.7 ±2.3 24.7 ±1.8 57.7 ±1.8 28.6 ±1.0 34.2 GPT-3.5 Turbo 1106 56.9 ±0.9 53.3 ±2.9 51.1 ±3.1 52.7 ±2.4 49.8 ±1.2 55.5 ±2.4 41.0 ±0.7 34.6 ±0.7 59.1 ±0.9 40.2 ±0.4 49.4 Spanish BioMistral 7B 45.9 ±6.0 42.6 ±7.7 38.2 ±5.6 40.2 ±6.9 37.7 ±6.0 39.5 ±7.0 33.7 ±4.2 27.4 ±3.5 63.7 ±4.8 34.6 ±3.6 40.4 Mistral 7B Instruct 43.6 ±5.0 41.5 ±5.3 39.0 ±6.0 36.2 ±6.8 35.8 ±4.9 36.4 ±5.0 32.3 ±2.8 26.6 ±2.4 64.7 ±5.4 35.0 ±2.6 39.1 BioMistral 7B DARE 46.2 ±5.9 44.6 ±7.1 39.4 ±6.7 37.3 ±8.0 40.0 ±6.7 38.4 ±7.3 34.5 ±3.9 28.7 ±3.5 66.8 ±6.1 36.2 ±3.2 41.2 BioMistral 7B TIES 46.5 ±6.5 42.9 ±7.3
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e3d0ec84-b929-4f28-8e97-c79adfd66d5e
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results

Spanish (continued)
BioMistral 7B SLERP: 47.5 ±7.5, 44.5 ±6.5, 39.9 ±6.2, 39.8 ±7.6, 39.6 ±7.0, 39.6 ±7.1, 34.6 ±3.9, 28.6 ±3.7, 66.8 ±5.4, 36.3 ±3.6; avg 41.7
MedAlpaca 7B: 34.8 ±4.3, 31.9 ±4.1, 35.6 ±4.4, 41.5 ±5.8, 30.4 ±5.0, 30.1 ±4.8, 28.1 ±2.5, 24.0 ±2.2, 47.4 ±13.0, 28.5 ±1.1; avg 33.2
PMC-LLaMA 7B: 24.0 ±1.7, 24.2 ±3.4, 30.6 ±3.3, 17.5 ±1.8, 27.7 ±2.5, 25.0 ±1.5, 27.0 ±0.9, 21.0 ±0.8, 46.3 ±12.6, 23.8 ±0.8; avg 26.7
MediTron-7B: 28.4 ±2.2, 31.9 ±2.9, 31.9 ±3.0, 21.1 ±3.6, 28.1 ±3.0, 23.3 ±1.9, 29.2 ±1.6, 21.6 ±1.1, 53.0 ±1.0, 29.8 ±1.1; avg 29.8
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
28d84795-0ee1-4b34-bc08-f88a75e2f866
# Biomistral: A Collection Of Open-Source Pretrained Large Language Models For Medical Domains ## H Multilingual Results

Spanish (continued)
BioMedGPT-LM-7B: 35.5 ±4.9, 34.8 ±5.5, 35.0 ±4.4, 31.7 ±5.6, 34.2 ±4.2, 32.7 ±3.0, 30.0 ±2.3, 24.7 ±1.8, 58.1 ±2.0, 28.6 ±1.0; avg 34.5
GPT-3.5 Turbo 1106: 58.6 ±0.2, 57.0 ±1.4, 52.9 ±0.3, 53.6 ±0.9, 52.8 ±0.3, 50.0 ±1.4, 43.8 ±0.2, 37.5 ±0.5, 60.6 ±0.5, 41.9 ±0.2; avg 50.9
{ "creation_datetime": "2024-03-04", "file_name": "2402.10373v1.md", "file_path": "paper_data/2402.10373v1.md", "file_size": 79931, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b6d26453-93c8-41b5-b191-3a546da3cb93
## Premise Order Matters In Reasoning With Large Language Models

Xinyun Chen1*, Ryan A. Chi1,2*, Xuezhi Wang1 and Denny Zhou1
*Equal contribution, 1Google DeepMind, 2Stanford University
{xinyunchen,xuezhiw,dennyzhou}@google.com, ryanchi@cs.stanford.edu

Large language models (LLMs) have accomplished remarkable reasoning performance in various domains. However, in the domain of reasoning tasks, we discover a frailty: LLMs are surprisingly brittle to the ordering of the premises, despite the fact that such ordering does not alter the underlying task. In particular, we observe that LLMs achieve the best performance when the premise order aligns with the context required in intermediate reasoning steps. For example, in deductive reasoning tasks, presenting the premises in the same order as the ground truth proof in the prompt (as opposed to random ordering) drastically increases the model's accuracy. We first examine the effect of premise ordering on deductive reasoning on a variety of LLMs, and our evaluation shows that permuting the premise order can cause a performance drop of over 30%. In addition, we release the benchmark R-GSM, based on GSM8K, to examine the ordering effect for mathematical problem-solving, and we again observe a significant drop in accuracy, relative to the original GSM8K benchmark.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
dfd2e15a-169e-4e70-a1ec-fdf0b407aa38
## 1. Introduction Large language models (LLMs) have demonstrated impressive performance across a variety of reasoning tasks (Austin et al., 2021; Chen et al., 2021; Cobbe et al., 2021; Hendrycks et al., 2021; Wei et al., 2022). In particular, recent state-of-the-art LLMs have reached or even surpassed human performance on multiple reasoning benchmarks, including STEM problem-solving and code generation (Bubeck et al., 2023; Gemini, 2023; Li et al., 2022). However, recent works show that LLMs exhibit failure modes that align with human-like cognitive bias (Berglund et al., 2023; Hagendorff et al., 2023; Jones and Steinhardt, 2022; McCoy et al., 2023; Shi et al., 2023). For example, Berglund et al. (2023) revealed the *Reversal Curse*; i.e., LLMs trained on "A is B" tend to fail to infer that "B is A." Distractibility is another failure mode (Jones and Steinhardt, 2022; Shi et al., 2023), where the LLM performance drastically decreases when irrelevant context is included in the task description. In this work, we investigate the effect that premise order has on LLM reasoning. Specifically, in deductive reasoning, changing the order of premises alone does not change the conclusion. Consider the following illustrative example: 1. If 𝐴 then 𝐵. 2. If 𝐵 then 𝐶. 3. 𝐴 is True. We can derive that 𝐶 is True regardless of the order of these 3 premises. While some studies show that humans have a preference on the premise order to facilitate their reasoning (Dekeyser et al., 2000; Girotto et al., 1997), the premise order does not drastically affect human performance, especially for problems that only involve *modus ponens* (if P then Q; P; therefore Q), which are relatively straightforward for humans. In contrast to humans, we observe that for LLMs, the premise order has a significant impact on reasoning performance. In particular, LLMs reach the best performance when the premises are arranged **in the same order** as they appear in the ground-truth proof. Taking the illustrative problem above as an example, we observe two phenomena: 1. Presenting "
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
207cd670-9cb0-4bad-9650-007839482a00
## 1. Introduction

If A then B" before "If B then C" in the prompt generally achieves a higher accuracy compared to the reversed order. 2. The performance gap is more significant when the number of premises increases. Intuitively, such a preference on the premise order aligns with human preference (Dekeyser et al., 2000) because in the preferred order, each derivation step can be done on-the-fly while looking at premises one by one, without needing to look back and forth across all premises at each step. We conduct a systematic study on the premise order effect using a variety of SoTA LLMs, including GPT-4-turbo, GPT-3.5-turbo (OpenAI, 2023), PaLM 2-L (Google, 2023), and Gemini Pro (Gemini, 2023). Our primary focus is deductive reasoning, and we benchmark all LLMs on problems that only involve *modus ponens* (if P then Q; P; therefore Q), where all LLMs in our evaluation at least achieve decent performance with a small number of premises. We show that the accuracy decrease caused by different ordering can be more than 30%. The ordering effect is further amplified when irrelevant premises (i.e., premises that are not needed to derive a conclusion) are presented in the prompt. Figure 1 illustrates a failure case, where all LLMs fail to generate the proof after changing the order of relevant rules. Interestingly, while all LLMs perform best when the premise order follows the ground truth proof, they reveal different preferences on other alternative orderings. Specifically, compared to randomly ordering the premises, GPT-4-turbo and GPT-3.5-turbo generally achieve better performance when the premise order is exactly the reverse of the ground truth proof, which enables LLMs to
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b4128d12-b4a4-430f-9ef8-485b04763b9d
## 1. Introduction

perform derivation via backward chaining. On the other hand, PaLM 2-L generally achieves the **worst performance** with such a reversed order. Besides logical reasoning, we construct R-GSM to further investigate the ordering effect on mathematical reasoning. Specifically, we build R-GSM on top of a subset of GSM8K problems, where we change the order of sentences in the problem description and manually verify that the ground truth answer remains the same. Our experiments again show that the performance of all LLMs notably drops, especially on longer problems that require more reasoning steps. Our evaluation highlights that even in reasoning domains where the premise order **does not matter**, premise order **does matter in LLM reasoning**. Specifically, the premise ordering effect indicates that LLMs are more comfortable reasoning via reading left-to-right instead of back-and-forth, which can be attributed to the auto-regressive model design or the reasoning bias learned from the training corpus. We leave proposing new training and modeling techniques to mitigate the premise order effect as future work.
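To make the illustrative A/B/C problem from the introduction concrete, here is a minimal sketch (my own illustration, not code from the paper) of forward chaining over modus ponens rules. It shows that the set of derivable conclusions is identical under any premise order, so the accuracy differences discussed above are purely about how the model reads the prompt; the rule and fact names are placeholders.

```python
# Minimal sketch (not from the paper): forward chaining over modus ponens rules.
# The set of derivable facts is the same for any ordering of the rules.

def forward_chain(facts, rules):
    """Derive everything provable from `facts`, where each rule is
    (premises, conclusion), e.g. ({"A"}, "B") encodes "If A then B"."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

rules_forward = [({"A"}, "B"), ({"B"}, "C")]      # order matches the proof
rules_backward = list(reversed(rules_forward))    # reversed presentation

# Both orderings yield {"A", "B", "C"}; only the reading order differs.
assert forward_chain({"A"}, rules_forward) == forward_chain({"A"}, rules_backward) == {"A", "B", "C"}
```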
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
01e6442a-8d5a-430a-b6ed-cb26697cfcf4
## 2. Benchmarks 2.1. Logical Reasoning Prior work has revealed the weaknesses of LLMs in logical reasoning (Han et al., 2022; Saparov and He, 2022; Saparov et al., 2023; Wan et al., 2024; Xu et al., 2023), especially when the proof is long and requires the knowledge of multiple deduction theorems. To isolate the effect of premise orders, we focus on a confined problem space adapted from SimpleLogic (Zhang et al., 2022), which only includes propositional logic problems with definite clauses. Specifically, each problem includes: (1) a set of facts 𝐴1,*. . .*, 𝐴𝑛 that hold true; (2) a set of rules of the form "If 𝑋, then 𝑌", "If 𝑋0 and 𝑋1, then 𝑌", or "If 𝑋0 and 𝑋1 and 𝑋2, then 𝑌"; and (3) a conclusion "𝐶 is True" to be proved. As opposed to SimpleLogic - which formulates the problem as a binary classification task (i.e., indicate whether the conclusion is True or False) - in our benchmark, every problem has a ground-truth label of True, and we consider the prediction to be correct only when the generated proof is completely valid. With these strict criteria, the LLM is required to produce the step-by-step deduction that leads to the conclusion, and any hallucination of non-existent facts and rules is considered erroneous. The key characteristic of our benchmark is that for each logical reasoning problem, we synthetically generate variants with **different premise orders.** Specifically, we denote the order that conforms to the ground truth proof with forward chaining as the *forward* order, where the rule applied in each derivation step is sequentially presented in the problem description. Intuitively, presenting premises in the forward order simplifies the problem for humans, as this allows us to write the proof on-the-fly while reading the premises. Conversely, a premise ordering that is more random increases the task difficulty, since carrying out the derivation requires us to repetitively look for premises for each reasoning step. Motivated by this intuition, we categorize different premise orders based on their Kendall tau distance 𝜏 (Cicirello, 2019; Sen, 1968) to the forward order, normalized
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0da4bd61-dfd3-402b-9a1d-8b198e14a59d
## 2. Benchmarks 2.1. Logical Reasoning

into the range [−1, 1]. Specifically, 𝜏 = 1 is the *forward* order, and we denote the order with 𝜏 = −1 as the backward order, which is the reverse of the forward order and aligns with the proof via backward chaining. 𝜏 ≈ 0 suggests that there is no strong correlation between the premise order in the problem description and the proof. To thoroughly investigate the LLM preference on different premise orders, we evaluate the model performance on 𝜏 = 0.5, 0 and −0.5, in addition to the forward (𝜏 = 1) and backward (𝜏 = −1) orders. We present examples with 𝜏 = 1 and 0 in Figure 1, and defer examples with other 𝜏 values to Figure 11 in Appendix B. We measure the premise order effect by varying the following two factors:

- **Number of rules required in the proof.** It is expected that the premise order effect is more significant with more rules. For our benchmark, we generate problems whose numbers of rules range from 4 to 12.
- **Number of distracting rules** (i.e., rules that are not useful for the proof) presented in the problem. The presence of distracting rules also complicates the problem, as premise selection itself is challenging (Ferreira and Freitas, 2020; Irving et al., 2016; Wang et al., 2017), and LLMs are shown to be easily distracted by irrelevant context (Shi et al., 2023). We include problem variants with 0, 5 and 10 distracting rules.

We generate 200 problems for each number of required rules. Considering different premise orders and numbers of distracting rules, each problem includes 15 variants, resulting in a total of 27K problems in our benchmark.
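As a rough sketch of the ordering metric just described (my own illustration; the paper cites Cicirello (2019) and Sen (1968) for the definition), the normalized Kendall tau value between a premise ordering and the forward order can be computed from concordant and discordant premise pairs:

```python
# Minimal sketch (assumed implementation): normalized Kendall tau between a
# premise ordering and the forward (ground-truth-proof) order, in [-1, 1].
from itertools import combinations

def normalized_kendall_tau(order, forward_order):
    """Returns 1.0 for the forward order and -1.0 for its exact reverse."""
    position = {premise: i for i, premise in enumerate(order)}
    concordant = discordant = 0
    for a, b in combinations(forward_order, 2):   # pairs ordered as in the proof
        if position[a] < position[b]:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

forward = ["rule1", "rule2", "rule3", "rule4"]
assert normalized_kendall_tau(forward, forward) == 1.0
assert normalized_kendall_tau(list(reversed(forward)), forward) == -1.0
```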
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1cd3cb56-55cc-4ea7-a4ba-bb5cae135044
## 2.2. R-Gsm For Mathematical Reasoning Figure 2 | R-GSM example where the original problem can be correctly solved by all LLMs in our evaluation, but all of them failed on the reordered one. Different calculation steps and their corresponding problem statements are annotated in light blue. Specifically, the reasoning steps of the original problem follows the ordering of problem statements, while the reordered problem does not. To further assess the effect of premise orders beyond logical reasoning, we construct the R-GSM dataset based on GSM8K (Cobbe et al., 2021), which is a popular benchmark of grade school math word problems. Specifically, we first select GSM8K test problems with at least 5 sentences in the problem description, then filter out those problems where there is no alternative ordering that does not change the ground truth answer, e.g., problem statements that follow the causal order of an event series. For each of the remaining problem, we keep the last sentence untouched and rewrite the problem description with a different ordering of other sentences. Minor editing on words is allowed to ensure the grammatical correctness of the problem description. To facilitate the annotation process, for each problem, we write a simple function to enumerate all alternative orderings of problem statements until an ordering that causes the LLM prediction failure is discovered, which can be used for our manual rewriting if the alternative ordering found in the enumeration process happens to preserve the ground truth answer. In total, our R-GSM benchmark contains 220 pairs of problems, including both the original GSM8K problem description and the manually rewritten one with a different ordering of problem statements. Despite that over 60% of problems in R-GSM only have 5 sentences, and all problems have at most 8 sentences, our evaluation shows that all LLMs still perform considerably worse on rewritten problems. Figure 2 presents an example in R-GSM where all LLMs correctly solve the original problem but not the rewritten one. Specifically, the reasoning steps for the original problem follows the ordering of problem statements, while for the rewritten problem, the second calculation step in the correct solution should refer to the second-to-last sentence instead of the second sentence in the problem description. We provide a more detailed case study in Section 3.3, and present the full dataset statistics in Appendix A.
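The annotation helper described above can be sketched as follows (hypothetical function and parameter names; the paper does not include this code). It enumerates reorderings of the problem statements, keeps the last sentence untouched, and stops at the first ordering that breaks the model's prediction, which is then used as a starting point for manual rewriting.

```python
# Minimal sketch (hypothetical helper, not the authors' released tool).
from itertools import permutations

def candidate_reorderings(sentences):
    """Yield reorderings of all sentences except the last one (kept untouched)."""
    *body, last = sentences
    for perm in permutations(body):
        if list(perm) != body:                    # skip the original ordering
            yield list(perm) + [last]

def find_breaking_order(sentences, solve, gold_answer):
    """Return the first reordering on which `solve` (an LLM wrapper) no longer
    produces `gold_answer`; the annotator then rewrites that variant manually."""
    for reordered in candidate_reorderings(sentences):
        if solve(" ".join(reordered)) != gold_answer:
            return reordered
    return None
```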
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7b43171d-3688-45f8-83a9-7e197a2c0d8d
## 3. Experiments 3.1. Experimental Setup

We evaluate the premise ordering effect on GPT-4-turbo, GPT-3.5-turbo, PaLM 2-L and Gemini Pro. We perform greedy decoding with temperature 0 and apply zero-shot prompting in all experiments. On R-GSM, the model input only contains the problem description without additional instructions. For logical reasoning, as shown in Figure 1, we add an instruction in the prompt to ask for a derivation that specifies which premise is used in each step.
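The setup above can be summarized in a short sketch (mine, using a placeholder `query_model` rather than any vendor-specific API; the logical-reasoning instruction below paraphrases the one shown in Figure 1 rather than quoting it).

```python
# Minimal sketch of the zero-shot, greedy-decoding evaluation setup (assumed
# wrapper; replace `query_model` with a real client for each evaluated LLM).

def query_model(prompt, temperature=0.0):
    """Placeholder for a single LLM call with greedy decoding."""
    raise NotImplementedError("plug in GPT-4-turbo / GPT-3.5-turbo / PaLM 2-L / Gemini Pro here")

LOGIC_INSTRUCTION = (
    "Prove the conclusion step by step, stating which premise is used in each step."
)  # paraphrase of the instruction in Figure 1, not the exact wording

def run_logic_example(problem_text):
    return query_model(f"{LOGIC_INSTRUCTION}\n\n{problem_text}", temperature=0.0)

def run_rgsm_example(problem_text):
    # R-GSM inputs contain only the problem description, with no extra instruction.
    return query_model(problem_text, temperature=0.0)
```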
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
24a90e62-0683-4519-bc24-e9e3ed720272
## 3.2. Logical Reasoning Figure 3 presents the results with different numbers of relevant rules included in ground truth proofs, where the problem does not contain distracting rules, and the shuffled accuracy is the aggregation of results with 𝜏 = 0.5, 0 and -0.5. Across different LLMs, the forward order consistently achieves the best performance, which aligns with the human preference. The performance drop caused by alternative orderings becomes more significant when the number of rules increases. Meanwhile, models with weaker reasoning capabilities are also more sensitive to different premise orders. Specifically, while the accuracy decrease of GPT-4-turbo and PaLM 2-L is up to 20−30%, with Gemini-Pro and GPT-3.5-turbo, changing the premise order from the forward order can degrade the accuracy from over 65% to below 25%, with an accuracy decrease of more than 40%. Breakdown on different premise orders. We present the results of fine-grained breakdown on premise ordering in Figure 5, where the orders are categorized based on Kendall tau distance 𝜏 as described in Section 2.1. Interestingly, while the top preference of all LLMs is the forward order, their preferences on other orders are not alike. Specifically, GPT-4-turbo generally prefers the backward order over other orders, and the overall performance decreases with a smaller absolute value of 𝜏. This observation is also consistent with the human reasoning pattern, as backward chaining is another well-established inference method. On the other hand, PaLM 2-L generally performs the worst with the backward order. With the decrease of 𝜏 (i.e., the premise order deviates more from the forward order), the accuracy drops. The preferences of Gemini Pro and GPT-3.5-turbo are less consistent, still they prefer the backward order more often than other non-forward premise orders. Effect of distracting rules. We assess the effect of distracting rules of GPT-4-turbo and PaLM 2-L, which reach a decent performance without the presence of distracting rules. Figures 4 and 6 show that adding distracting rules further decreases the reasoning performance and magnifies the effect of different premise orders. Still, the overall preferences of both LLMs remain the same as the scenario without distracting rules. Specifically, both LLMs again achieve the best performance with the forward order, and GPT-4-turbo prefers the backward order over other non
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7ab93ad3-632f-483b-9880-53b89cfffb9a
## 3.2. Logical Reasoning

-forward orders, while PaLM 2-L performance decreases with a smaller 𝜏. Error analysis. In Table 1, we present the breakdown on prediction errors with different premise orders. We consider the following error categories:

1. *wrong refutation*: the LLM wrongly claims that the conclusion cannot be proved;
2. *rule hallucination*: the LLM generates rules that do not exist in the problem;
3. *fact hallucination*: the LLM generates facts that do not exist in the problem and are unproven.

We observe that for all LLMs, fact hallucination is typically the most common error pattern, and this error type escalates dramatically with the decrease of 𝜏. The main reason is that LLMs are inclined to use the rules in the sequential order in which they are presented in the problem, so when the next rule in the problem is not yet applicable, LLMs might still hallucinate facts to complete the proof step. Simultaneously, we observe that the percentage of wrong refutation is generally lower for 𝜏 = −1 than for |𝜏| < 1. We present an example of wrong refutation in Figure 1, and we include more examples of rule and fact hallucination in Figure 10 of Appendix B.

Table 1, GPT-4-turbo:

| 𝜏 | Correct | Wrong Refutation | Rule Hallucination | Fact Hallucination |
|------|---------|------------------|--------------------|--------------------|
| 1 | 96.5% | 0.5% | 1.5% | 1.5% |
| 0.5 | 76.0% | 10.5% | 2.0% | 11.5% |
| 0 | 82.0% | 4.5% | 3.5% | 10.0% |
| -0.5 | 84.5% | 1.0% | 4.5% | 10.0% |
| -1 | 84.0% | 0.0% | 3.5% | 12.5% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7802663a-d2f6-4ffa-9d0b-90c34b335a60
## 3.2. Logical Reasoning

Table 1 (continued), GPT-3.5-turbo:

| 𝜏 | Correct | Wrong Refutation | Rule Hallucination | Fact Hallucination |
|------|---------|------------------|--------------------|--------------------|
| 1 | 30.0% | 24.5% | 9.5% | 35.5% |
| 0.5 | 1.0% | 54.5% | 9.5% | 33.0% |
| 0 | 0.5% | 55.0% | 7.5% | 34.5% |
| -0.5 | 2.0% | 50.0% | 8.5% | 37.5% |
| -1 | 1.5% | 34.5% | 14.5% | 47.0% |

Table 1 (continued), PaLM 2-L:

| 𝜏 | Correct | Wrong Refutation | Rule Hallucination | Fact Hallucination |
|------|---------|------------------|--------------------|--------------------|
| 1 | 88.0% | 0.5% | 3.0% | 8.5% |
| 0.5 | 74.5% | 1.5% | 9.5% | 14.5% |
| 0 | 65.5% | 2.0% | 11.0% | 21.5% |
| -0.5 | 59.5% | 1.5% | 10.0% | 29.0% |
| -1 | 57.5% | 1.0% | 11.5% | 30.0% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5cf91105-92f7-48b2-8974-0c3132dec9af
## 3.2. Logical Reasoning

Table 1 (continued), Gemini Pro:

| 𝜏 | Correct | Wrong Refutation | Rule Hallucination | Fact Hallucination |
|------|---------|------------------|--------------------|--------------------|
| 1 | 16.5% | 28.0% | 5.0% | 50.5% |
| 0.5 | 0.0% | 59.0% | 3.5% | 37.5% |
| 0 | 0.0% | 34.0% | 9.0% | 57.0% |
| -0.5 | 0.5% | 24.5% | 9.5% | 65.5% |
| -1 | 0.5% | 27.5% | 11.5% | 60.5% |
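The strict grading criterion from Section 2.1 and the hallucination categories above can be illustrated with a small checker (a sketch under assumed data formats, not the authors' evaluation script); wrong refutation would be flagged separately, when the model declines to produce a proof at all.

```python
# Minimal sketch (assumed format): validate a claimed proof step by step.
# A step is valid only if it cites a rule that exists in the problem and whose
# premises are all facts that are given or already proven.

def classify_proof(steps, facts, rules):
    """`steps` and `rules` are (premises, conclusion) pairs; `facts` are given facts."""
    known = set(facts)
    rule_set = {(frozenset(p), c) for p, c in rules}
    for premises, conclusion in steps:
        if (frozenset(premises), conclusion) not in rule_set:
            return "rule hallucination"   # cites a rule not present in the problem
        if not set(premises) <= known:
            return "fact hallucination"   # relies on an unproven fact
        known.add(conclusion)
    return "correct"
```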
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
75dcbf2b-1235-4b0b-b522-4f96a45c0f5f
## 3.3. R-Gsm For Mathematical Reasoning

Table 2a (overall accuracy on R-GSM):

| Model | Init Acc | Reorder Acc |
|---------------|----------|-------------|
| GPT-4-turbo | 94.1% | 85.0% |
| PaLM 2-L | 86.4% | 79.5% |
| Gemini Pro | 80.5% | 69.1% |
| GPT-3.5-turbo | 67.3% | 51.8% |

Table 2b (accuracy restricted to problems whose original version the model solves):

| Model | Init Acc | Reorder Acc |
|---------------|----------|-------------|
| GPT-4-turbo | 100% | 89.9% |
| PaLM 2-L | 100% | 87.9% |
| Gemini Pro | 100% | 74.6% |
| GPT-3.5-turbo | 100% | 64.9% |

Table 2a demonstrates the overall results on R-GSM. Again, all LLMs achieve a lower performance on R-GSM. Note that the original GSM8K problems are not necessarily written in the most preferable way, and thus sometimes the manual rewriting facilitates the reasoning and allows the model to correctly solve the reordered version of a problem that it fails on the original one. Therefore, in Table 2b, for each LLM, we also present the accuracy on those problems with their original descriptions solved by the model. We show that all LLMs fail on at least 10% of reordered problems that they are initially able to solve, and this performance degradation is more than 35% with GPT-3.5-turbo. Breakdown of problem complexity. Figures 7 and 8 present the breakdown results on
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8a95e45c-3e6c-4b65-9188-efbfbc2543ca
## 3.3. R-Gsm For Mathematical Reasoning

different numbers of reasoning steps and different numbers of problem sentences, respectively. Unsurprisingly, across all LLMs, the proof accuracy suffers on problems that require more reasoning steps and contain a greater number of sentences. Overall, the gap between the accuracies on initial and rewritten problems is more significant with more reasoning steps and longer problems for both GPT-4-turbo and Gemini Pro, while the gap remains similar across different numbers of reasoning steps and problem lengths for PaLM 2-L and GPT-3.5-turbo. Error analysis. To further understand the failure modes, for each LLM, we analyze those error cases where the original problems can be correctly solved but not the reordered ones, and we categorize the common error types in Table 3. Similar to our observation in logical reasoning experiments, the prediction errors in R-GSM are primarily due to the LLMs blindly using numbers in the sequential order of their appearances in the problem. Specifically, the most common error case for all LLMs is their tendency to overlook temporal order. Figure 2 presents such an example, where the prediction failure is because some earlier events are described in the later part of the problem. Another category of errors occurs when some quantities are not specified while processing the problem in the sequential order, which introduces unknown variables for calculation. Take, for example, the problem in Figure 9. In the original problem, the number of each animal can be directly calculated based on its preceding sentence. However, in the reordered problem, the number of gerbils cannot directly be computed
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
abae04be-0fbd-4d19-827a-7eff2a91b729
## 3.3. R-Gsm For Mathematical Reasoning

based on the preceding sentences, since the number of fish remains unknown up to that point, and the LLM must read the remaining sentences and calculate the number of fish first. However, the prediction from GPT-3.5-turbo instead uses the number calculated in the previous step (i.e., the number of rabbits) to calculate the number of gerbils, resulting in an error. Such a failure mode is less common with PaLM 2-L, but still constitutes a non-negligible proportion of prediction errors for the other LLMs. We present more examples of model predictions in Appendix C.

Table 3 (breakdown of common error types):

| Model | Temporal | Unknown | Others |
|---------------|----------|---------|--------|
| GPT-4-turbo | 45.0% | 15.0% | 40.0% |
| GPT-3.5-turbo | 21.6% | 19.6% | 58.8% |
| PaLM 2-L | 34.8% | 4.3% | 60.9% |
| Gemini Pro | 29.5% | 18.2% | 52.3% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
243cf3f0-dd22-4be0-9b5e-a0f886800758
## 4. Related Work Failure modes of LLMs. The premise order effect in this work is connected to several failure modes of LLMs in the literature, including the reversal curse (Berglund et al., 2023), distractibility (Shi et al., 2023), and limited capability of logical reasoning (Han et al., 2022; Saparov and He, 2022; Saparov et al., 2023; Wan et al., 2024; Xu et al., 2023; Zhu et al., 2023). Specifically, Shi et al. (2023) show that including irrelevant context in the problem statement leads to a considerable performance drop on GSM8K and other reasoning benchmarks, revealing that LLMs are *distractible*. This finding is in-line with our evaluation on logical reasoning, where we observe that adding irrelevant rules not only degrades the overall logical reasoning performance, but also escalates the premise order effect. The *Reversal Curse* (Berglund et al., 2023) unveils another perspective of the order effect, where they show that an LLM that recognizes "A is B" does not necessarily learn that "B is A." While their work studies the order effect between two entities within a single factual statement, our work focuses on reasoning problems with multiple premises, without restrictions on the number of (or relationship between) entities. In particular, for logical reasoning, we demonstrate that random permutations of premises often result in **worse** accuracy than the purely backward order. Order effect for human logical reasoning. Although the premise order does not matter in deductive reasoning, several studies show that the premise order can impact the human reasoning performance (Dekeyser et al., 2000; Girotto et al., 1997). Dekeyser et al. (2000) described co-reference as a human preference of premise order; i.e., humans prefer the premises to be presented in an order where they can draw immediate conclusions after seeing each one. In this work, we show that LLMs also have such a preference, and they achieve the best performance when the ordering of rules follows the ground truth proof. Girotto et al. (1997) studied how the premise order affects logical reasoning for humans, and found that the premise order has a significant effect in solving modus tollens problems (i.e., if P, then Q; not Q; therefore, not P), but not *modus ponens* problems (i.e., if P, then Q;
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e0e89c6a-1aa0-4f4c-85a7-495cc37f50a8
## 4. Related Work

P; therefore, Q). However, differing from our work, they studied the influence of different ordering between rules and facts, e.g., their experiments on *modus tollens* problems show that presenting negation statements (not Q) before rules (if P, then Q) improves the performance over the reverse order. On the other hand, our work focuses on *modus ponens* problems that are easier for both humans and LLMs, and we show that the LLM performance is still quite sensitive to the ordering of the premises. Order effect of language models. Some prior works show that language models are able to understand permuted texts to some extent, i.e., after a random permutation of words, models usually preserve a reasonable performance (Abdou et al., 2022; Sinha et al., 2020). Moreover, Cao et al. (2023) shows that even when a large fraction of words are scrambled, GPT-4 still achieves decent performance on several reasoning benchmarks. In contrast to permuted texts in these works that are typically unnatural and nonsensical, our premise order permutations do not alter the semantic meaning and remain syntactically valid (we manually verify this). Nevertheless, we demonstrate that LLM reasoning performance is highly brittle to the ordering of the premises.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8e054cd5-5e95-4c7f-b40b-77a36e9dcb63
## 5. Conclusion In this work, we show that the premise order significantly affects LLMs' performance on reasoning tasks, even when the premise order does not change the underlying task itself. Our comprehensive evaluation demonstrates that LLM tendencies resemble human preference w.r.t. premise order, i.e., LLMs achieve the best performance when the premise order follows the intermediate reasoning steps to solve the problem. Conversely, LLMs face difficulties when the reasoning problem requires the model to read the problem description back-and-forth, resulting in a performance drop of over 30%. We further extend the study to mathematical reasoning and present the R-GSM benchmark, and again experimentally confirm the ordering effect. While humans also have a preference of premise orders for reasoning problems, LLMs are much more susceptible to such ordering effects. We can attempt to ascribe the premise order effect to several candidate factors, such as the auto-regressive model design, training objectives, and training data mixture. However, we leave proposing theoretical explanations of this limitation and developing new techniques towards addressing the premise order effect as future work.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d64cc574-b707-4c45-9291-e1177059cb76
## Acknowledgment We would like to thank Chen Liang and Dale Schuurmans for helpful discussion and feedback.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2ccf25b7-217b-4907-bd71-b82602236c6b
## A. R-Gsm Dataset Statistics

Table 4 presents the statistics of our R-GSM benchmark.

(a) Breakdown by number of reasoning steps:

| # Steps | # Problems |
|---------|------------|
| 2 | 20 |
| 3 | 43 |
| 4 | 65 |
| 5 | 43 |
| 6 | 23 |
| 7 | 15 |
| 8 | 11 |

(b) Breakdown by number of sentences in the problem description:

| # Sentences | # Problems |
|-------------|------------|
| 5 | 133 |
| 6 | 65 |
| 7 | 19 |
| 8 | 3 |
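As a quick sanity check on the reconstructed statistics (my own arithmetic, not part of the paper), both breakdowns cover all 220 R-GSM problem pairs:

```python
# Both breakdowns in Table 4 sum to the 220 R-GSM problem pairs.
by_steps = {2: 20, 3: 43, 4: 65, 5: 43, 6: 23, 7: 15, 8: 11}
by_sentences = {5: 133, 6: 65, 7: 19, 8: 3}
assert sum(by_steps.values()) == 220
assert sum(by_sentences.values()) == 220
```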
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3bde0b58-bb7d-46de-a56a-ade0bf33a879
## B. Logical Reasoning Examples Figure 10 presents common classes of errors - hallucinated rules and facts - by LLMs while solving our logical reasoning benchmark. Figure 11 presents a sample logical reasoning problem with premise orders of different 𝜏 values. We can see that the rules become less ordered when the absolute value of 𝜏 decreases.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7b9684e5-801a-432a-b996-12a535b3666c
## C. R-Gsm Examples

In this section, we present more examples of LLM predictions on R-GSM problems. Figure 12 presents a failure case of a probability problem, which falls into the "Others" category in the error analysis (Table 3). Specifically, in the reordered problem, after the LLM reads the sentence about the scenario with a normal teacher coming in, the LLM immediately attempts to compute the probability that Marcus has to turn in his homework, ignoring that it first needs to compute the probability that a normal teacher will come in using the next sentence. Figure 13 shows another wrong prediction of GPT-4 Turbo, where the error pattern is analogous to rule hallucination in the logical reasoning evaluation. Interestingly, when the sentence about yellow cars is moved to precede the sentence about the quantities of blue and green cars, GPT-4 Turbo starts to hallucinate a relationship between the number of yellow cars and the number of blue cars, resulting in insufficient information to correctly solve the problem. Figures 14 and 15 present examples where both the original and reordered problems are correctly solved by LLMs in our evaluation. In both original problems, the succeeding sentences do not strongly depend on the preceding sentences.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2ef266d2-4dc9-471d-b47b-15887bc3c5c3
## D. Full Results For Logical Reasoning Tables 5 and 8 present the accuracy numbers for Figures 3 and 5, which are results on different numbers of relevant rules without distracting rules. Tables 6 and 9 present the accuracy numbers for Figures 4 and 6 with 5 distracting rules. Tables 7 and 10 present the accuracy numbers for Figures 4 and 6 with 10 distracting rules.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4a079458-11f4-42e8-a881-b1245f9b239a
## E. Full Results On R-Gsm

Tables 11 and 12 present the accuracy numbers for Figures 7 and 8, which are breakdown results on R-GSM problems with different numbers of reasoning steps and different numbers of sentences in the problem description respectively.

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 4 | 100% | 100% | 99.8% |
| 5 | 100% | 100% | 99.5% |
| 6 | 100% | 100% | 99.3% |
| 7 | 99.5% | 98.5% | 97.8% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4ea22d5e-3506-4135-8985-1ff17a43022b
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 8 | 99.5% | 98.5% | 95.8% |
| 9 | 99.0% | 95.5% | 95.3% |
| 10 | 99.0% | 97.0% | 91.0% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
629fe66a-c07c-4a68-a244-5397936701ba
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 11 | 98.5% | 95.5% | 91.5% |
| 12 | 98.0% | 86.5% | 84.2% |

(a) GPT-4-turbo.

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 4 | 93.0% | 73.5% | 77.0% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fdfed474-7f49-4a85-afc7-e19fbc5078a7
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 5 | 90.0% | 58.0% | 57.0% |
| 6 | 87.5% | 77.5% | 72.0% |
| 7 | 65.5% | 25.0% | 22.5% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7ed46296-d0c2-47da-89e1-dee2294d8119
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 8 | 50.0% | 17.5% | 12.5% |
| 9 | 47.5% | 11.5% | 8.7% |
| 10 | 34.0% | 4.5% | 2.5% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6f40999a-cbd8-4593-9a4e-520a60bd9ed8
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 11 | 33.0% | 2.0% | 1.5% |
| 12 | 16.5% | 0.5% | 0.2% |

(c) Gemini Pro.

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 4 | 99.0% | 99.5% | 98.8% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ccc3909b-4d87-4a29-8c92-f1c9446aea5e
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 5 | 98.5% | 99.5% | 98.2% |
| 6 | 100% | 100% | 98.3% |
| 7 | 99.0% | 98.0% | 97.0% |
| 8 | 99.0% | 95.5% | 93.5% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2d667eb6-6890-4863-b4aa-bf33bddca533
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 9 | 98.5% | 95.5% | 93.5% |
| 10 | 99.0% | 92.5% | 87.3% |
| 11 | 98.5% | 91.0% | 87.5% |
| 12 | 96.5% | 84.0% | 80.8% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7d8f8f24-b81a-4ab4-8517-4575331d64d3
## E. Full Results On R-Gsm

(b) PaLM 2-L.

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 4 | 88.5% | 70.0% | 71.8% |
| 5 | 84.0% | 55.0% | 51.7% |
| 6 | 87.5% | 67.0% | 62.0% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fd913baf-7fc6-4033-bcc9-3ae472851a1d
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 7 | 64.0% | 23.0% | 20.2% |
| 8 | 56.5% | 15.5% | 13.0% |
| 9 | 50.5% | 9.5% | 8.7% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e11f4f4f-c42a-4421-86bc-56389e389bd8
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 10 | 37.0% | 3.5% | 3.5% |
| 11 | 36.0% | 1.0% | 2.8% |
| 12 | 30.0% | 1.0% | 1.2% |

(d) GPT-3.5-turbo.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5d123bf4-87e8-48c8-9644-460bc5f54db4
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 4 | 98.5% | 95.5% | 94.5% |
| 4 | 98.0% | 99.5% | 99.0% |
| 5 | 97.0% | 93.5% | 94.8% |
| 5 | 99.5% | 98.5% | 98.0% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6acab730-8f14-4a7a-a88a-0fb8421c1bfc
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 6 | 88.0% | 85.0% | 88.5% |
| 6 | 97.5% | 97.0% | 96.7% |
| 7 | 87.5% | 68.0% | 75.8% |
| 7 | 93.5% | 92.0% | 90.2% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3d8fe51f-6420-4da6-8647-09fabb4f9561
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 8 | 84.5% | 63.0% | 66.0% |
| 8 | 89.5% | 85.5% | 82.2% |
| 9 | 81.5% | 56.5% | 60.8% |
| 9 | 88.0% | 84.0% | 82.7% |
| 10 | 79.5% | 46.5% | 55.5% |
| 10 | 89.0% | 77.0% | 74.2% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4483845d-1964-4506-80e0-5be0d967c4a2
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 11 | 73.0% | 43.5% | 42.5% |
| 11 | 84.5% | 75.5% | 71.5% |
| 12 | 80.5% | 72.5% | 57.2% |
| 12 | 64.0% | 32.5% | 38.2% |

(a) GPT-4-turbo. (b) PaLM 2-L.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
538bb1ba-0668-44ab-aa55-254856700dcd
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 4 | 97.5% | 95.0% | 96.3% |
| 4 | 97.0% | 98.0% | 97.7% |
| 5 | 94.0% | 91.0% | 92.5% |
| 5 | 98.0% | 96.0% | 96.5% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
58329582-5337-45f9-9e9b-73a2ddb79060
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 6 | 89.0% | 77.0% | 79.7% |
| 6 | 92.5% | 88.5% | 90.3% |
| 7 | 71.5% | 55.0% | 60.7% |
| 7 | 84.5% | 80.0% | 76.0% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f891fdc8-328b-451d-81f5-391028d1b08f
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 8 | 68.5% | 39.5% | 46.7% |
| 8 | 81.5% | 76.5% | 70.5% |
| 9 | 61.5% | 38.0% | 42.7% |
| 9 | 73.0% | 65.0% | 62.8% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d65bfae3-65d9-4e00-a255-e863fadc2ed5
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 10 | 47.0% | 29.5% | 30.7% |
| 10 | 64.5% | 59.0% | 53.7% |
| 11 | 46.5% | 15.5% | 25.0% |
| 11 | 58.5% | 53.0% | 48.7% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b8527ea0-eaa0-4fb3-bac6-4ecc4e1e4683
## E. Full Results On R-Gsm

| # Rules | Forward | Backward | Shuffled |
|---------|---------|----------|----------|
| 12 | 57.5% | 46.5% | 40.0% |
| 12 | 36.5% | 15.5% | 18.2% |

(a) GPT-4-turbo. (b) PaLM 2-L.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
85001284-7138-46d8-86f2-4d3c84a9922c
# Rules

| # Rules | 𝜏 = 1 | 𝜏 = 0.5 | 𝜏 = 0 | 𝜏 = −0.5 | 𝜏 = −1 |
|---------|-------|---------|-------|----------|--------|
| 8 | 99.0% | 95.0% | 91.0% | 94.5% | 95.5% |
| 8 | 95.5% | 89.5% | 86.5% | 87.0% | 77.0% |
| 10 | 99.0% | 91.0% | 82.5% | 88.5% | 92.5% |
| 10 | 95.0% | 84.0% | 83.0% | 76.0% | 75.5% |
| 11 | 98.5% | 90.0% | 84.5% | 88.0% | 91.0% |
| 11 | 94.0% | 80.5% | 76.5% | 79.0% | 66.0% |
| 12 | 96.5% | 76.0% | 82.0% | 84.5% | 84.0% |
| 12 | 88.0% | 74.5% | 65.5% | 59.5% | 57.5% |

(a) GPT-4-turbo. (b) PaLM 2-L.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a5cf604a-dbcc-4ad1-9ab9-dcf87fb5fdac
# Rules

| # Rules | 𝜏 = 1 | 𝜏 = 0.5 | 𝜏 = 0 | 𝜏 = −0.5 | 𝜏 = −1 |
|---------|-------|---------|-------|----------|--------|
| 6 | 87.5% | 68.5% | 75.5% | 72.0% | 77.5% |
| 8 | 50.0% | 10.5% | 12.0% | 15.0% | 17.5% |
| 10 | 34.0% | 2.0% | 3.5% | 2.0% | 4.5% |
| 12 | 16.5% | 0.0% | 0.0% | 0.5% | 0.5% |

(c) Gemini Pro. (d) GPT-3.5-turbo.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7f05ffc2-f9fb-44d2-b412-edbd5f233fe6
# Rules

| # Rules | 𝜏 = 1 | 𝜏 = 0.5 | 𝜏 = 0 | 𝜏 = −0.5 | 𝜏 = −1 |
|---------|-------|---------|-------|----------|--------|
| 8 | 89.5% | 86.5% | 78.0% | 82.0% | 85.5% |
| 8 | 84.5% | 67.5% | 67.0% | 63.5% | 63.0% |
| 10 | 89.0% | 75.5% | 70.5% | 76.5% | 77.0% |
| 10 | 79.5% | 58.0% | 56.0% | 52.5% | 46.5% |
| 11 | 84.5% | 68.5% | 67.5% | 78.5% | 75.5% |
| 11 | 73.0% | 41.5% | 40.0% | 46.0% | 43.5% |
| 12 | 80.5% | 49.5% | 61.5% | 60.5% | 72.5% |
| 12 | 64.0% | 39.0% | 42.0% | 33.5% | 32.5% |

(a) GPT-4-turbo. (b) PaLM 2-L.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ae05b323-b40c-4357-9c4e-9b0887e1ff97
# Rules

| # Rules | 𝜏 = 1 | 𝜏 = 0.5 | 𝜏 = 0 | 𝜏 = −0.5 | 𝜏 = −1 |
|---------|-------|---------|-------|----------|--------|
| 8 | 81.5% | 73.0% | 65.5% | 73.0% | 76.5% |
| 8 | 68.5% | 48.5% | 45.5% | 46.0% | 39.5% |
| 10 | 64.5% | 48.5% | 50.5% | 62.0% | 59.0% |
| 10 | 47.0% | 35.0% | 30.0% | 27.0% | 29.5% |
| 11 | 58.5% | 54.0% | 41.0% | 51.0% | 53.0% |
| 11 | 46.5% | 30.0% | 24.5% | 20.5% | 15.5% |
| 12 | 57.5% | 33.0% | 42.0% | 45.0% | 46.5% |
| 12 | 36.5% | 18.0% | 19.0% | 17.5% | 15.5% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
978ffda1-b412-496d-bbc9-20d573814844
# Rules

(a) GPT-4-turbo. (b) PaLM 2-L.

| # Steps | Init Acc | Reorder Acc |
|---------|----------|-------------|
| ≥ 2 | 94.1% | 85.0% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2949859d-9817-488f-b869-cb01697d8288
# Rules

| # Steps | Init Acc | Reorder Acc |
|---------|----------|-------------|
| ≥ 3 | 94.0% | 84.0% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e20824f3-58ae-40d6-93d5-7dc20f923bf1
# Rules

| # Steps | Init Acc | Reorder Acc |
|---------|----------|-------------|
| ≥ 4 | 94.3% | 82.8% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4a2fd007-eb21-456e-be15-3dec49f26aa0
# Rules

| # Steps | Init Acc | Reorder Acc |
|---------|----------|-------------|
| ≥ 5 | 92.4% | 79.3% |
| ≥ 6 | 89.8% | 73.5% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
05654cda-8121-4252-9a99-3a5486effb58
# Rules

| # Steps | Init Acc | Reorder Acc |
|---------|----------|-------------|
| ≥ 2 | 86.4% | 79.5% |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08939v1.md", "file_path": "paper_data/2402.08939v1.md", "file_size": 57756, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }