---
license: apache-2.0
---

# LogicLLaMA Model Card

## Model details

LogicLLaMA is a language model that translates natural-language (NL) statements into first-order logic (FOL) rules; for example, it maps "All humans are mortal" to ∀x (Human(x) → Mortal(x)).
It is trained by fine-tuning the LLaMA2-7B model on the [MALLS-v0.1](https://huggingface.co/datasets/yuan-yang/MALLS-v0) dataset.

**Model type:**
This repo contains the LoRA delta weights for direct-translation LogicLLaMA, which translates an NL statement into a FOL rule in a single pass.
Delta weights for the other variants are also available:
- [direct translation LogicLLaMA-7B](https://huggingface.co/yuan-yang/LogicLLaMA-7b-direct-translate-delta-v0.1)
- [naive correction LogicLLaMA-7B](https://huggingface.co/yuan-yang/LogicLLaMA-7b-naive-correction-delta-v0.1)
- [direct translation LogicLLaMA-13B](https://huggingface.co/yuan-yang/LogicLLaMA-13b-direct-translate-delta-v0.1)
- [naive correction LogicLLaMA-13B](https://huggingface.co/yuan-yang/LogicLLaMA-13b-naive-correction-delta-v0.1)

**License:**
Apache License 2.0

## Using the model

Check out how to use the model on our project page: https://github.com/gblackout/LogicLLaMA. A minimal loading sketch is shown below.
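
The snippet below is a minimal sketch of applying the LoRA delta weights on top of a base checkpoint with the `transformers` + `peft` stack. The `meta-llama/Llama-2-7b-hf` base ID and the raw-NL prompt are illustrative assumptions; the exact base weights and prompt template are specified in the project repo.

```python
# Minimal sketch: attach the LoRA delta weights to a base LLaMA checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-2-7b-hf"  # assumption: base checkpoint; see the project repo
DELTA = "yuan-yang/LogicLLaMA-7b-direct-translate-delta-v0.1"

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, DELTA)  # wrap the base model with the LoRA adapter
model.eval()

# Illustrative only: the real prompt template is defined in the project repo.
nl = "All humans are mortal."
inputs = tokenizer(nl, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```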


**Primary intended uses:**
LogicLLaMA is intended to be used for research.


## Citation

```
@article{yang2023harnessing,
  title={Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation},
  author={Yuan Yang and Siheng Xiong and Ali Payani and Ehsan Shareghi and Faramarz Fekri},
  journal={arXiv preprint arXiv:2305.15541},
  year={2023}
}
```