yuan-yang committed
Commit c1c4bbd
1 Parent(s): 905a70c

Update README.md

Files changed (1):
  README.md (+38, -0)
---
license: apache-2.0
---
# LogicLLaMA Model Card

## Model details

LogicLLaMA is a language model that translates natural-language (NL) statements into first-order logic (FOL) rules.
It is trained by fine-tuning the LLaMA2-13B model on the [MALLS-v0.1](https://huggingface.co/datasets/yuan-yang/MALLS-v0) dataset.
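
An NL-to-FOL translation of this kind looks like the following (an illustrative pair, not drawn from the dataset):

```
NL:  All students who study hard will pass the exam.
FOL: ∀x ((Student(x) ∧ StudiesHard(x)) → PassExam(x))
```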

**Model type:**
This repo contains the LoRA delta weights for direct-translation LogicLLaMA, which translates an NL statement into a FOL rule in a single pass.
We also provide the delta weights for the other modes:
- [direct translation LogicLLaMA-7B](https://huggingface.co/yuan-yang/LogicLLaMA-7b-direct-translate-delta-v0.1)
- [naive correction LogicLLaMA-7B](https://huggingface.co/yuan-yang/LogicLLaMA-7b-naive-correction-delta-v0.1)
- [direct translation LogicLLaMA-13B](https://huggingface.co/yuan-yang/LogicLLaMA-13b-direct-translate-delta-v0.1)
- [naive correction LogicLLaMA-13B](https://huggingface.co/yuan-yang/LogicLLaMA-13b-naive-correction-delta-v0.1)

**License:**
Apache License 2.0

## Using the model

Check out how to use the model on our project page: https://github.com/gblackout/LogicLLaMA
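
Since the repo ships LoRA delta weights rather than a full model, they must be applied on top of the base LLaMA-2 checkpoint. A minimal sketch using the `transformers` and `peft` libraries is shown below; the base-model ID and the prompt template are illustrative assumptions, not the exact ones used in training (see the project page for the supported usage).

```python
def build_prompt(nl_statement: str) -> str:
    """Wrap an NL statement in a simple instruction prompt.

    NOTE: illustrative format, not necessarily the template used in training.
    """
    return (
        "Translate the following natural language statement into a "
        f"first-order logic rule.\nNL: {nl_statement}\nFOL:"
    )

def load_logicllama(
    base_model: str = "meta-llama/Llama-2-13b-hf",  # assumed base checkpoint
    delta: str = "yuan-yang/LogicLLaMA-13b-direct-translate-delta-v0.1",
):
    """Load the base model and apply the LoRA delta weights on top."""
    # Deferred imports so the prompt helper works without these packages installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForCausalLM.from_pretrained(base_model)
    model = PeftModel.from_pretrained(model, delta)  # attaches the LoRA adapters
    return tokenizer, model
```

Generation then follows the usual `transformers` pattern: tokenize `build_prompt(...)`, call `model.generate`, and decode the text after the `FOL:` marker.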

**Primary intended uses:**
LogicLLaMA is intended to be used for research.

## Citation

```
@article{yang2023harnessing,
  title={Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation},
  author={Yuan Yang and Siheng Xiong and Ali Payani and Ehsan Shareghi and Faramarz Fekri},
  journal={arXiv preprint arXiv:2305.15541},
  year={2023}
}
```