LogicLLaMA Model Card
Model details
LogicLLaMA is a language model that translates natural-language (NL) statements into first-order logic (FOL) rules. It is trained by fine-tuning the LLaMA-7B model on the MALLS dataset.
Model type: This repo contains the LoRA delta weights for naive correction LogicLLaMA, which takes an NL statement paired with a predicted FOL rule and corrects potential errors in the rule. It is used as a downstream model together with ChatGPT: ChatGPT does the "heavy lifting" by predicting the initial FOL translation, and LogicLLaMA then refines that rule by correcting its errors. In our experiments, this mode outperforms both ChatGPT alone and direct translation LogicLLaMA.
We also provide delta weights for the other modes; see the project page below for the full list.
License: Apache License 2.0
Using the model
Check out how to use the model on our project page: https://github.com/gblackout/LogicLLaMA
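Below is a minimal sketch of the naive-correction step using transformers and peft. The base-model path, LoRA weight path, and prompt template are placeholders/assumptions for illustration, not the exact setup used in training; refer to the project page for the actual prompt format and checkpoints.

```python
# Minimal sketch: apply the naive-correction LoRA delta weights to LLaMA-7B
# and correct a draft FOL rule. Paths and the prompt template are assumptions.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_path = "path/to/llama-7b-hf"        # placeholder: base LLaMA-7B weights
lora_path = "path/to/logicllama-lora"    # placeholder: this repo's delta weights

tokenizer = LlamaTokenizer.from_pretrained(base_path)
base_model = LlamaForCausalLM.from_pretrained(
    base_path, torch_dtype=torch.float16, device_map="auto"
)
# Merge in the naive-correction LoRA delta weights on top of the base model.
model = PeftModel.from_pretrained(base_model, lora_path)

# Stage 1 (not shown): obtain an initial FOL draft from ChatGPT.
nl = "All dogs are mammals."
draft_fol = "∀x (Dog(x) → Mammal(x))"   # e.g., ChatGPT's first-pass translation

# Stage 2: LogicLLaMA corrects potential errors in the draft.
prompt = (
    f"NL: {nl}\n"
    f"FOL: {draft_fol}\n"
    "Corrected FOL:"                     # illustrative template only
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens (the corrected rule).
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```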
Primary intended uses: LogicLLaMA is intended for research use.
Citation
@article{yang2023harnessing,
  title={Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation},
  author={Yuan Yang and Siheng Xiong and Ali Payani and Ehsan Shareghi and Faramarz Fekri},
  journal={arXiv preprint arXiv:2305.15541},
  year={2023}
}