---
license: apache-2.0
---

# LogicLLaMA Model Card

## Model details

LogicLLaMA is a language model that translates natural-language (NL) statements into first-order logic (FOL) rules. It is trained by fine-tuning the LLaMA2-7B model on the [MALLS-v0.1](https://huggingface.co/datasets/yuan-yang/MALLS-v0) dataset.

**Model type:**
This repo contains the LoRA delta weights for naive correction LogicLLaMA, which, given an NL statement paired with a predicted FOL rule, corrects potential errors in the predicted rule. It is used as a downstream model together with ChatGPT: ChatGPT does the "heavy lifting" by predicting the initial FOL translation, and LogicLLaMA then refines the rule by correcting potential errors. In our experiments, this mode yields better performance than ChatGPT alone and than direct translation LogicLLaMA.

We also provide the delta weights for other modes:

- [direct translation LogicLLaMA-7B](https://huggingface.co/yuan-yang/LogicLLaMA-7b-direct-translate-delta-v0.1)
- [naive correction LogicLLaMA-7B](https://huggingface.co/yuan-yang/LogicLLaMA-7b-naive-correction-delta-v0.1)
- [direct translation LogicLLaMA-13B](https://huggingface.co/yuan-yang/LogicLLaMA-13b-direct-translate-delta-v0.1)
- [naive correction LogicLLaMA-13B](https://huggingface.co/yuan-yang/LogicLLaMA-13b-naive-correction-delta-v0.1)

**License:**
Apache License 2.0

## Using the model

Check out how to use the model on our project page: https://github.com/gblackout/LogicLLaMA

A minimal loading sketch is also included at the end of this card.

**Primary intended uses:**
LogicLLaMA is intended to be used for research.

## Citation

```
@article{yang2023harnessing,
  title={Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation},
  author={Yuan Yang and Siheng Xiong and Ali Payani and Ehsan Shareghi and Faramarz Fekri},
  journal={arXiv preprint arXiv:2305.15541},
  year={2023}
}
```
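
## Loading sketch

The project page above documents the full pipeline. As a rough illustration only, the sketch below shows one way to load the LoRA delta weights with the `peft` library, assuming they are a standard PEFT adapter on top of the `meta-llama/Llama-2-7b-hf` base checkpoint; the prompt shown is hypothetical, and the actual correction prompt template is defined in the project repository.

```python
# Minimal sketch: load the base LLaMA-2 checkpoint and apply the LoRA adapter.
# Assumptions: the delta weights in this repo are a standard PEFT LoRA adapter,
# and "meta-llama/Llama-2-7b-hf" is the matching base checkpoint. See the
# project page for the exact loading procedure and prompt template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
adapter_id = "yuan-yang/LogicLLaMA-7b-naive-correction-delta-v0.1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Hypothetical correction prompt (NL statement + predicted FOL rule to correct);
# the real template comes from the project repository.
prompt = (
    "Correct the following FOL rule for the statement.\n"
    "NL: All dogs are mammals.\n"
    "FOL: ∀x (Dog(x) → Mammal(x))\n"
    "Corrected FOL:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```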