# SPBERT MLM+WSO (Initialized)

## Introduction

Paper: [SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs](https://arxiv.org/abs/2106.09997)

Authors: _Hieu Tran, Long Phan, James Anibal, Binh T. Nguyen, Truong-Son Nguyen_

## How to use
For more details, check out [our Github repo](https://github.com/heraclex12/NLP2SPARQL).

Here is an example in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel

# Load the SPBERT tokenizer and PyTorch model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained('razent/spbert-mlm-wso-base')
model = AutoModel.from_pretrained("razent/spbert-mlm-wso-base")

# A SPARQL query with its structural symbols spelled out as tokens
# (brack_open, sep_dot, brack_close, var_*)
text = "select * where brack_open var_a var_b var_c sep_dot brack_close"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
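The model returns token-level hidden states. If you need a single vector per query, one common option is attention-masked mean pooling over `output.last_hidden_state`. This is only an illustrative sketch, not a pooling strategy prescribed by the paper or this model card:

```python
import torch

# Illustrative sketch: mean-pool token embeddings into one query embedding.
# Assumes `output` and `encoded_input` from the PyTorch snippet above.
def mean_pool(last_hidden_state, attention_mask):
    # Expand the mask so padded positions contribute nothing to the average
    mask = attention_mask.unsqueeze(-1).type_as(last_hidden_state)
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts

query_embedding = mean_pool(output.last_hidden_state, encoded_input['attention_mask'])
print(query_embedding.shape)  # (1, hidden_size), e.g. (1, 768) for a base-size model
```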
or in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel

# Load the SPBERT tokenizer and TensorFlow model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained('razent/spbert-mlm-wso-base')
model = TFAutoModel.from_pretrained("razent/spbert-mlm-wso-base")

text = "select * where brack_open var_a var_b var_c sep_dot brack_close"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
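The example query above is the verbalized form of `SELECT * WHERE { ?a ?b ?c . }`. The authoritative encoder lives in the NLP2SPARQL repo; the helper below is only an illustrative sketch whose token names and rules are inferred from that single example, and it assumes punctuation in the input query is already whitespace-separated:

```python
import re

# Illustrative sketch only: rewrite a raw SPARQL query into the
# whitespace-separated token form shown in the usage example above.
REPLACEMENTS = {
    "{": "brack_open",
    "}": "brack_close",
    ".": "sep_dot",
}

def verbalize_sparql(query: str) -> str:
    # Rewrite variables like ?a -> var_a, then lowercase keywords
    query = re.sub(r"\?(\w+)", r"var_\1", query)
    return " ".join(REPLACEMENTS.get(tok, tok.lower()) for tok in query.split())

print(verbalize_sparql("SELECT * WHERE { ?a ?b ?c . }"))
# -> "select * where brack_open var_a var_b var_c sep_dot brack_close"
```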
## Citation

```bibtex
@misc{tran2021spbert,
      title={SPBERT: An Efficient Pre-training BERT on SPARQL Queries for Question Answering over Knowledge Graphs},
      author={Hieu Tran and Long Phan and James Anibal and Binh T. Nguyen and Truong-Son Nguyen},
      year={2021},
      eprint={2106.09997},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```