---
license: apache-2.0
---
# LogicLLaMA Model Card
## Model details
LogicLLaMA is a language model that translates natural-language (NL) statements into first-order logic (FOL) rules.
It is trained by fine-tuning the LLaMA-2-7B model on the [MALLS-v0.1](https://huggingface.co/datasets/yuan-yang/MALLS-v0) dataset.
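For illustration, the task maps an NL sentence to a well-formed FOL formula; a typical input/output pair (a made-up example, not necessarily drawn from MALLS) looks like:
```
NL:  All squares are rectangles.
FOL: ∀x (Square(x) → Rectangle(x))
```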
**Model type:**
This repo contains the LoRA delta weights for direct-translation LogicLLaMA, which translates an NL statement into an FOL rule in a single pass.
We also provide delta weights for the other modes:
- [direct translation LogicLLaMA-7B](https://huggingface.co/yuan-yang/LogicLLaMA-7b-direct-translate-delta-v0.1)
- [naive correction LogicLLaMA-7B](https://huggingface.co/yuan-yang/LogicLLaMA-7b-naive-correction-delta-v0.1)
- [direct translation LogicLLaMA-13B](https://huggingface.co/yuan-yang/LogicLLaMA-13b-direct-translate-delta-v0.1)
- [naive correction LogicLLaMA-13B](https://huggingface.co/yuan-yang/LogicLLaMA-13b-naive-correction-delta-v0.1)
**License:**
Apache License 2.0
## Using the model
Check out how to use the model on our project page: https://github.com/gblackout/LogicLLaMA
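As a quick start, the sketch below shows one way to apply the LoRA delta weights on top of a base LLaMA model with `transformers` and `peft`. The base-model checkpoint and the prompt template here are assumptions for illustration; the authors' exact loading code and prompt format are in the project repo.

```python
# Minimal sketch, assuming a LLaMA-2-7B base checkpoint and the
# Hugging Face transformers + peft libraries. See the project repo
# (https://github.com/gblackout/LogicLLaMA) for the exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_name = "meta-llama/Llama-2-7b-hf"  # assumed base model checkpoint
lora_name = "yuan-yang/LogicLLaMA-7b-direct-translate-delta-v0.1"

tokenizer = AutoTokenizer.from_pretrained(base_name)
base_model = AutoModelForCausalLM.from_pretrained(
    base_name, torch_dtype=torch.float16, device_map="auto"
)
# Apply the LoRA delta weights on top of the base model
model = PeftModel.from_pretrained(base_model, lora_name)
model.eval()

# Hypothetical prompt; the actual template is defined in the project repo.
prompt = (
    "Translate the following natural-language statement to a "
    "first-order logic rule:\nAll squares are rectangles."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```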
**Primary intended uses:**
LogicLLaMA is intended to be used for research.
## Citation
```
@article{yang2023harnessing,
  title={Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation},
  author={Yuan Yang and Siheng Xiong and Ali Payani and Ehsan Shareghi and Faramarz Fekri},
  journal={arXiv preprint arXiv:2305.15541},
  year={2023}
}
```