---
license: apache-2.0
---

# LogicLLaMA Model Card

## Model details

LogicLLaMA is a language model that translates natural-language (NL) statements into first-order logic (FOL) rules.
It is trained by fine-tuning the LLaMA2-7B model on the [MALLS-v0.1](https://huggingface.co/datasets/yuan-yang/MALLS-v0) dataset.
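
As a quick way to inspect the training data, the dataset can be loaded with the `datasets` library. This is only a sketch: the split name and record fields are assumptions, so check the dataset card for the exact schema.

```
from datasets import load_dataset

# Load the MALLS NL-FOL pairs; "train" is an assumed split name.
malls = load_dataset("yuan-yang/MALLS-v0", split="train")

# Each record pairs an NL statement with its FOL rule;
# the exact field names may differ from this sketch.
print(malls[0])
```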

**Model type:**
This repo contains the LoRA delta weights for naive correction LogicLLaMA, which,
given a pair of an NL statement and a predicted FOL rule, corrects potential errors in the predicted rule.
It is used as a downstream model together with ChatGPT:
ChatGPT does the "heavy lifting" by predicting the initial FOL translation, and LogicLLaMA then refines that rule by correcting potential errors.
In our experiments, this mode yields better performance than both ChatGPT alone and direct translation LogicLLaMA.
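
For illustration, a hypothetical correction step might look like this (an example of ours, not drawn from the paper or the dataset):

```
NL statement:  Every square is a rectangle.
Predicted FOL: ∀x (Square(x) ∧ Rectangle(x))   <- wrong connective
Corrected FOL: ∀x (Square(x) → Rectangle(x))
```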

We also provide the delta weights for other modes:
- [direct translation LogicLLaMA-7B](https://huggingface.co/yuan-yang/LogicLLaMA-7b-direct-translate-delta-v0.1)
- [naive correction LogicLLaMA-7B](https://huggingface.co/yuan-yang/LogicLLaMA-7b-naive-correction-delta-v0.1)
- [direct translation LogicLLaMA-13B](https://huggingface.co/yuan-yang/LogicLLaMA-13b-direct-translate-delta-v0.1)
- [naive correction LogicLLaMA-13B](https://huggingface.co/yuan-yang/LogicLLaMA-13b-naive-correction-delta-v0.1)

**License:**
Apache License 2.0

## Using the model

Check out how to use the model on our project page: https://github.com/gblackout/LogicLLaMA
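
For convenience, here is a minimal sketch of how LoRA delta weights like these are commonly applied with `transformers` and `peft`. The base-model ID and the prompt template below are assumptions, not the authors' exact pipeline; see the project page for that.

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed IDs: the paper fine-tunes LLaMA2-7B, and this card lists the
# naive-correction adapter repo above.
base_model_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "yuan-yang/LogicLLaMA-7b-naive-correction-delta-v0.1"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
# Apply the LoRA delta weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Hypothetical prompt: the actual template used during fine-tuning is
# defined in the project repo.
prompt = (
    "Correct the predicted first-order logic rule for the sentence.\n"
    "NL: All dogs are mammals.\n"
    "Predicted FOL: forall x (Dog(x) and Mammal(x))\n"
    "Corrected FOL:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Loading in float16 with `device_map="auto"` keeps the 7B base model within a single modern GPU, and the LoRA adapter adds only a small number of extra parameters on top of it.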

**Primary intended uses:**
LogicLLaMA is intended to be used for research.

## Citation

```
@article{yang2023harnessing,
    title={Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation},
    author={Yuan Yang and Siheng Xiong and Ali Payani and Ehsan Shareghi and Faramarz Fekri},
    journal={arXiv preprint arXiv:2305.15541},
    year={2023}
}
```