Sefika committed on
Commit 0473516 · verified · 1 Parent(s): 339b5c6

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +9 -106
README.md CHANGED
@@ -1,106 +1,9 @@
- ---
- license: gpl-3.0
- language:
- - en
- metrics:
- - f1
- - recall
- - precision
- ---
- # GraphMatcher
- GraphMatcher aims to find the correspondences between two ontologies and outputs the possible alignments between them.
-
- GraphMatcher leverages a Graph Attention Network [2] in its neural network structure.
- The project introduces a new neighborhood aggregation algorithm, so it examines the contribution of neighboring terms that previous matchers have not used.
-
- The project was submitted to the conference track of OAEI 2022 at The 17th International Workshop on Ontology Matching (ISWC 2022) and obtained the highest F1-measure on uncertain reference alignments among the participating systems. Its system paper has been published, and it was invited to the poster presentation session.
-
- ## Set up
- * 1.) Install the requirements:
- ```pip install -r requirements.txt```
-
- * 2.) Set the parameters in config.ini:
- ````
- [General]
- dataset = ------> name of a dataset, e.g., conference
- K = ------> the parameter for K-fold cross-validation
- ontology_split = ------> True/False
- max_false_examples = ------>
-
- [Paths]
- dataset_folder = ------> a path to the ontologies
- alignment_folder = ------> a path to the reference alignments
- save_model_path = ------> a path to save the model to
- load_model_path = ------> a path to load the model from
- output_folder = ------> the output folder for the alignments
-
- [Parameters]
- max_paths = ------>
- max_pathlen = ------> number of neighboring concept types: equivalent class, subclass of (general to specific, or specific to general), ...
-
- [Hyperparameters]
- lr = ------> learning rate
- num_epochs = ------> number of epochs
- weight_decay = ------> weight decay
- batch_size = ------> batch size (8/16/32)
- ````
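For orientation, a filled-in config.ini for the conference track might look like the sketch below. Every value here is a hypothetical placeholder chosen for illustration, not a default shipped with the project:

```ini
[General]
dataset = conference
K = 5
ontology_split = True
max_false_examples = 50000

[Paths]
dataset_folder = data/conference/ontologies
alignment_folder = data/conference/reference-alignments
save_model_path = models/graphmatcher.pt
load_model_path = models/graphmatcher.pt
output_folder = output/alignments

[Parameters]
max_paths = 10
max_pathlen = 2

[Hyperparameters]
lr = 0.001
num_epochs = 50
weight_decay = 0.0001
batch_size = 16
```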
-
- * 3.) Train the model:
- ```shell
- python src/train_model.py
- ```
- * 4.) Test the model:
- ```shell
- python src/test_model.py ${source.rdf} ${target.rdf}
- ```
- ### Sample Alignment:
- ```xml
- <map>
-   <Cell>
-     <entity1 rdf:resource='http://conference#has_the_last_name'/>
-     <entity2 rdf:resource='http://confof#hasSurname'/>
-     <relation>=</relation>
-     <measure rdf:datatype='http://www.w3.org/2001/XMLSchema#float'>0.972</measure>
-   </Cell>
- </map>
- ```
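Alignment cells like the sample above can be consumed programmatically. A minimal sketch with Python's standard `xml.etree`, assuming the full output file declares the usual `rdf` namespace (as alignment files in this format normally do):

```python
import xml.etree.ElementTree as ET

# RDF namespace URI used by the rdf:resource / rdf:datatype attributes.
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

# A single cell, with the rdf namespace declared inline for this example.
xml_doc = """<map xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <Cell>
    <entity1 rdf:resource='http://conference#has_the_last_name'/>
    <entity2 rdf:resource='http://confof#hasSurname'/>
    <relation>=</relation>
    <measure rdf:datatype='http://www.w3.org/2001/XMLSchema#float'>0.972</measure>
  </Cell>
</map>"""

root = ET.fromstring(xml_doc)
for cell in root.iter("Cell"):
    # Namespaced attributes are keyed as "{namespace-uri}localname".
    e1 = cell.find("entity1").get(f"{{{RDF}}}resource")
    e2 = cell.find("entity2").get(f"{{{RDF}}}resource")
    conf = float(cell.find("measure").text)
    print(e1, e2, conf)
```

Each iteration yields one correspondence: the two matched entity URIs and the matcher's confidence score.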
-
- * 5.) Evaluate the model with MELT.
-
- Note: The code in train_model.py and test_model.py is partially based on the VeeAlign [1] project, with the permission of its main author. I would like to thank the main author.
-
- ## References:
- [1]
- ````
- @inproceedings{iyer-etal-2021-veealign,
-     title = "{V}ee{A}lign: Multifaceted Context Representation Using Dual Attention for Ontology Alignment",
-     author = "Iyer, Vivek and Agarwal, Arvind and Kumar, Harshit",
-     booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
-     month = nov,
-     year = "2021",
-     address = "Online and Punta Cana, Dominican Republic",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.emnlp-main.842",
-     doi = "10.18653/v1/2021.emnlp-main.842",
-     pages = "10780--10792",
- }
- ````
- [2]
- ````
- @misc{https://doi.org/10.48550/arxiv.1710.10903,
-     title = {Graph Attention Networks},
-     author = {Veličković, Petar and Cucurull, Guillem and Casanova, Arantxa and Romero, Adriana and Liò, Pietro and Bengio, Yoshua},
-     url = {https://arxiv.org/abs/1710.10903},
-     publisher = {arXiv},
-     doi = {10.48550/ARXIV.1710.10903},
-     year = {2017},
-     copyright = {arXiv.org perpetual, non-exclusive license}
- }
- ````
 
+ # My Model
+ This is my model card.
+
+ ## Usage
+ ```python
+ from transformers import AutoModel, AutoTokenizer
+ tokenizer = AutoTokenizer.from_pretrained("Sefika/GraphMatcher")
+ model = AutoModel.from_pretrained("Sefika/GraphMatcher")
+ ```