diff --git "a/20240921/2212.05581v4.json" "b/20240921/2212.05581v4.json"
new file mode 100644
--- /dev/null
+++ "b/20240921/2212.05581v4.json"
@@ -0,0 +1,549 @@
+{
+ "title": "Efficient Relation-aware Neighborhood Aggregation in Graph Neural Networks via Tensor Decomposition",
+ "abstract": "Numerous Graph Neural Networks (GNNs) have been developed to tackle the challenge of Knowledge Graph Embedding (KGE). However, many of these approaches overlook the crucial role of relation information and inadequately integrate it with entity information, resulting in diminished expressive power. In this paper, we propose a novel knowledge graph encoder that incorporates tensor decomposition within the aggregation function of Relational Graph Convolutional Network (R-GCN). Our model enhances the representation of neighboring entities by employing projection matrices of a low-rank tensor defined by relation types. This approach facilitates multi-task learning, thereby generating relation-aware representations. Furthermore, we introduce a low-rank estimation technique for the core tensor through CP decomposition, which effectively compresses and regularizes our model. We adopt a training strategy inspired by contrastive learning, which relieves the training limitation of the 1-N method inherent in handling vast graphs. We outperformed all our competitors on two common benchmark datasets, FB15k-237 and WN18RR, while using low-dimensional embeddings for entities and relations. Codes are available here: https://github.com/pbaghershahi/TGCN.git.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Knowledge Graphs (KGs) have broad applications in real-world problems. Nonetheless, their evolving nature often leads to numerous missing relations. Hence, predicting these missing relations becomes a crucial challenge known as Knowledge Graph Completion (KGC). The approach to addressing KGC involves embedding KGs in low dimensions, a process known as Knowledge Graph Embedding (KGE). These embeddings are then utilized to predict the missing links in the knowledge graph.\nA group of methods within Knowledge Graph Completion (KGC) falls under the category of embedding-based approaches [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###]. These methods embed KGs into a feature space, preserving their semantic relations. Additionally, neural network models have demonstrated impressive performance in KGC such as Convolutional Neural Networks (CNN) [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###] and Transformers [8 ###reference_b8###, 9 ###reference_b9###]. However, many of these methods independently embed entities and relations without considering their local neighborhoods and the rich information within the graph structures. In contrast, Graph Neural Networks (GNNs) [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###] are a type of method that encodes the graph structure and aggregates information over a local neighborhood. These approaches aid a new entity in acquiring an expressive representation by leveraging the information from its observed neighbors.\nHowever, the utilization of high-dimensional embedding often raises scalability challenges for state-of-the-art methods [14 ###reference_b14###, 3 ###reference_b3###, 15 ###reference_b15###] when embedding Knowledge Graphs (KGs). The problem is exacerbated when Graph Neural Networks (GNNs) generate secondary undirected graphs from the original KGs [13 ###reference_b13###, 16 ###reference_b16###]. More importantly, many GNNs neglect the importance of relations while embedding entities due to inefficient integration of the information of entities and relations [10 ###reference_b10###, 17 ###reference_b17###, 11 ###reference_b11###] which cannot improve expressiveness [18 ###reference_b18###]. On the other hand, the group of tensor decomposition-based methods [19 ###reference_b19###, 14 ###reference_b14###, 20 ###reference_b20###] leverage relations in encoding entities effectively, whereas they embed entities independently, neglecting the underlying graph structure in the process.\nIn this paper, we introduce a new framework to exploit these two paradigms to tackle their limitations. We take advantage of the Tucker decomposition in the aggregation function of R-GCN [10 ###reference_b10###] to enhance the integration of the information of entities and relations. The Tucker decomposition offers knowledge sharing because the transformation matrices that are applied to neighboring entities are low-rank and relation-dependent which helps with efficiently extracting the interaction between an entity and the relation. Unlike previous tensor decomposition models, our method generates representations of neighboring entities instead of scoring triplets, making our model a general KG encoder. We use CANDECOMP/PARAFAC (CP) decomposition as a regularization method for low-rank approximation of the core tensor of our model. Also, we utilize a contrastive loss that solves the scalability problem of 1-N training method for training on enormous KGs. 
Although utilizing low dimensionality of embeddings, our method outperforms all our competitors on both FB15k-237 and WN18RR as standard KGC benchmarks. We also show that our method improves the base R-GCN on FB15k-237 by 36% with the same decoder."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Related Work",
+ "text": ""
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Background",
+ "text": ""
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Methods",
+ "text": "In KGs, relations have as rich information as entities. Although R-GCN generalizes GCN to KGs, its weakness is inefficiently combining the information of relations and entities. In other words, it uses relations for indexing weight matrices that project representations of entities. This method cannot enhance the expressiveness of the model [18 ###reference_b18###] because the learned knowledge of entities is not shared with relations. Therefore, R-GCN does not optimize the embedding of relations properly.\nIn this work, we propose Tucker Graph Convolutional Networks (TGCN), a GNN model to address the above limitations. Inspired by TuckER, we change the aggregation function of R-GCN to take advantage of parameter sharing by multi-task learning through relations. In our proposed aggregation function, the representations of the neighbors are transformed by applying learnable weight matrices, which their parameters are defined by the embedding of the relation type of each entity. Following R-GCN, a universal projection matrix is applied to the representation of the entity itself. The output of each TGCN layer for an entity is computed as a normalized sum of all projected representations. Our propagation model for updating the representation of an entity is as follows:\nwhere is a function of each neighboring entity representation and its relation type embedding. Here, is the set of neighbors of are related by , is a normalization factor, is the embedding of , is the core weight tensor of layer , and is the loop weight matrix. Also, each is assigned to an embedding vector and . and are the dimensionalities of the embeddings of entities and relations respectively.\nEquation 3 ###reference_### shows that contrary to TuckER, TGCN embeds the rich information of the graph structure in its representations. Also, learned knowledge is accessible through the whole KG by parameter sharing due to the low-rank core tensor."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Model Compression",
+ "text": "The number of trainable parameters of the Tucker core tensor is of . Therefore, in case of using high dimensional embeddings, this leads to overfitting and memory issues. To tackle this problem, we take a step forward in terms of parameter sharing and introduce a model compression method utilizing CANDECOMP/PARAFAC (CP) decomposition [41 ###reference_b41###, 42 ###reference_b42###] for low-rank approximation of the core tensor. This approximation encourages multi-task learning more and significantly decreases the number of free parameters of the core tensor. We also anticipate that the method can be used as an effective regularization method.\nCP Decomposition: As a tensor rank decomposition method, CP decomposition factorizes a tensor into a sum of rank-one tensors. An approximation of a third-order tensor using CP decomposition is as follows:\nin which is the rank of the tensor and , , and .\nIf we define , and as factor matrices of the rank-one components, then the above model can be written in terms of the frontal slices of :\nin which is a factor matrix of the rank-one components. Finally, we can write CP decomposition in short notation as:\nNow, we rewrite the TGCN propagation model for updating an entity representation using CP decomposition as follows:\nin which is a product of tensors , , and . Finding matrices , , and is originally an optimization problem, but in our case, we try to learn these matrices. Therefore, we select a value for which we call the number of bases and it is equivalent to the rank () of the approximated tensor.\nOur general KG encoder can integrate with many decoding methods (scoring functions). Here, we use DistMult [23 ###reference_b23###] and TuckER [19 ###reference_b19###] and consider being the generated representations for source and target entities.\nDistMult is a fast and simple decoder without extra parameters and which computes a triplet score by a three-way multiplication as follows:\nPreviously we used Tucker decomposition to produce representation vectors. However, it can be used to score triplets in the following way as proposed in TuckER:\nEventually, we add a logistic sigmoid layer to evaluate how probable a triplet is. Notably, we expect TGCN to perform better using more efficient decoding methods."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Training",
+ "text": "Most state-of-the-art models follow 1-N method [5 ###reference_b5###] for training, and they use Binary Cross Entropy (BCE) loss function for optimization. For a triplet the loss function is defined as:\nin which is the logistic sigmoid function.\nAlthough the 1-N approach has good performance, for each training iteration, it requires operations. Thus, a model faces scalability issues while could be a large number.\nSince KGC is a self-supervised problem, we approach the above issue by a training method inspired by contrastive learning which aids in producing expressive representations by preserving high-level features while filtering out low-level features, such as noise [43 ###reference_b43###]. Specifically, our training method is based on SimCLR [44 ###reference_b44###].\nSimCLR:\nA contrastive approach is in which two augmented samples are generated from each image in a training batch. The objective is to make the model-generated representations of these samples close to each other and distant from other samples within the batch. SimCLR incorporates the NT-Xent loss for training."
+ },
+ {
+ "section_id": "4.2.1",
+ "parent_section_id": "4.2",
+ "section_name": "4.2.1 1-b Training",
+ "text": "We are motivated by the fact that in KGs, two entities with a relation are likely to be close to each other in feature space compared to entities to which they are not connected. This intuition is similar to that of SimCLR. Therefore, we use NT-Xent loss as follows:\nin which is temperature, and is the set of entities in each training batch.\nContrary to 1-N training method, multiplication operations in 1-b method are just between unique entities of a single batch. Hence, the operational complexity of each iteration reduces to and can be set based on the computational limitations. We anticipate the representation generated by NT-Xent loss is more expressive than BCE loss, but we leave this for future research."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Experiments",
+ "text": ""
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Using Random Subgraphs",
+ "text": "To train TGCN we randomly take subgraphs from the whole KG in each iteration. This is necessary because real-world KGs are typically enormous, making it impractical to process them in their entirety. Utilizing random subgraphs during training is akin to injecting noise, thereby introducing a regularization effect that enhances performance. However, maintaining an appropriate subgraph size is crucial. On one hand, a significantly large random subgraph introduces excessive noise, particularly when entities in large KGs have high degree centrality, leading to counterproductive noise. On the other hand, if we keep the size of random subgraphs constant, each entity can still access more distant neighbors and their information. As a result, the size of random subgraphs, denoted as , plays a pivotal role in shaping the performance of our model."
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Datasets",
+ "text": "FB15k-237 [45 ###reference_b45###] and WN18RR [5 ###reference_b5###] are the standard KGC benchmarks we used for the evaluation of our model [5 ###reference_b5###]. Find the statistics of the datasets in the Appendix B ###reference_###. For each dataset, we add as the inverse of each triplet , known as reciprocal learning [14 ###reference_b14###], with being the inverse of . The model is trained using both original triplets and their inverse ones."
+ },
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "Evaluation Protocol",
+ "text": "We used Mean Reciprocal Rank (MRR) and Hits@ ratios in filtered setting [2 ###reference_b2###] as two commonly used ranking-based metrics for KGC to to evaluate our model. For fair evaluation, the random protocol proposed by [46 ###reference_b46###] is used to randomly shuffle all triplets before scoring and sorting them."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Results",
+ "text": "We evaluate the performance of TGCN against a few of the best GNN-based encoders and a few powerful baseline models. Our experimental results on FB15k-237 and WN18RR are illustrated in Table 1 ###reference_###. Overall, TGCN achieves the best results on both datasets. Worthwhile to note, most state-of-the-art models and our competitors, like Tucker, use high-dimensional embeddings, leading to severe scalability issues on massive KGs in web or social media applications. However, TGCN employs a comparatively lower dimensionality of embedding, alleviating this problem.\nDoE\n\nFB15k-237\nWN18RR\n\n\n\n\n\nMRR\n\n\n\nHits@1\n\n\n\nHits@3\n\n\n\nHits@10\n\n\n\nMRR\n\n\n\nHits@1\n\n\n\nHits@3\n\n\n\nHits@10\n\n\n\n\nDistMult [10 ###reference_b10###]\n\n\n\n100\n\n\n\n.241\n\n\n\n.155\n\n\n\n.263\n\n\n\n.419\n\n\n\n.430\n\n\n\n.390\n\n\n\n.440\n\n\n\n.490\n\n\n\n\nR-GCN [10 ###reference_b10###]\n\n\n\n500\n\n\n\n.250\n\n\n\n.150\n\n\n\n.260\n\n\n\n.420\n\n\n\n\u2014\n\n\n\n\u2014\n\n\n\n\u2014\n\n\n\n\u2014\n\n\n\n\nComplEx [20 ###reference_b20###]\n\n\n\n400\n\n\n\n.247\n\n\n\n.158\n\n\n\n.275\n\n\n\n.428\n\n\n\n.440\n\n\n\n.410\n\n\n\n.460\n\n\n\n.51\n\n\n\n\nConvE [5 ###reference_b5###]\n\n\n\n200\n\n\n\n.325\n\n\n\n.237\n\n\n\n.356\n\n\n\n.501\n\n\n\n.430\n\n\n\n.400\n\n\n\n.440\n\n\n\n.520\n\n\n\n\nRotatE [1 ###reference_b1###]\n\n\n\n1000\n\n\n\n.338\n\n\n\n.241\n\n\n\n.375\n\n\n\n.533\n\n\n\n.476\n\n\n\n.428\n\n\n\n.492\n\n\n\n.571\n\n\n\n\nSACN [17 ###reference_b17###]\n\n\n\n200\n\n\n\n.350\n\n\n\n.260\n\n\n\n.390\n\n\n\n.540\n\n\n\n.470\n\n\n\n.430\n\n\n\n.480\n\n\n\n.540\n\n\n\n\nCOMPGCN [11 ###reference_b11###]\n\n\n\n100\n\n\n\n.355\n\n\n\n.264\n\n\n\n.390\n\n\n\n.535\n\n\n\n.479\n\n\n\n.443\n\n\n\n.494\n\n\n\n.546\n\n\n\n\nTuckER [19 ###reference_b19###]\n\n\n\n200\n\n\n\n.358\n\n\n\n.266\n\n\n\n.394\n\n\n\n.544\n\n\n\n.470\n\n\n\n.444\n\n\n\n.482\n\n\n\n.526\n\n\n\n\nTGCN-DistMult\n\n\n\n100\n\n\n\n.339\n\n\n\n.249\n\n\n\n.370\n\n\n\n.517\n\n\n\n.452\n\n\n\n.419\n\n\n\n.461\n\n\n\n.516\n\n\n\n\nTGCN-Tucker\n\n\n\n100\n\n\n\n.359\n\n\n\n.266\n\n\n\n.396\n\n\n\n.542\n\n\n\n.482\n\n\n\n.441\n\n\n\n.500\n\n\n\n.560\nOn FB15k-237, TuckER is our closets competitor. It takes advantage of multi-task learning by parameter sharing through relations. Formally, relation types define the projection matrices applying to the representations of source entities, so entities having the same relation are projected into the same region of a feature space. This approach, while effective, reveals challenges in WN18RR with fewer relation types, causing overlapped feature space regions with indistinguishable decision boundaries between.\nOn the contrary, the projection matrices of TGCN encoder are applied to the neighbors of entities instead of the entities directly, and their projected representations are aggregated and combined with the previous information of the entities. Therefore, TGCN alleviates the above problem and it performs well on both datasets.\nLastly, unlike decoding/scoring methods such as TuckER, which are specifically proposed for link prediction, TGCN, COMPGCN, SACN, and R-GCN are general KG encoders so are applicable in node-level and graph-level tasks as well. Also, these encoders can integrate with different decoding methods for KGC."
+ },
+ {
+ "section_id": "6.1",
+ "parent_section_id": "6",
+ "section_name": "The Effect of The Decoding Methods",
+ "text": "Table 1 ###reference_### clearly shows that TGCN can adaptively combine with decoding methods to improve their performance. TGCN has improved the MRR of DistMult by on FB15k-237 and by on WN18RR. Moreover, in combination with TuckER decoder, TGCN has increased its MRR by on WN18RR and by on FB15k-237. Though, it is noteworthy that TGCN decreases DoE by compared to TuckER.\nDecoder\n\n\n\nCP\n\n\n\nMRR\n\n\n\nHits@1\n\n\n\nHits@3\n\n\n\nHits@10\n\n\n\n#NFP\n\n\n\n#EFP\n\n\n\n\n\n\nFB15k-237\n\n\n\nDistMult\n\n\n\nNo\n\n\n\n.339\n\n\n\n.249\n\n\n\n.370\n\n\n\n.517\n\n\n\n2.02M\n\n\n\n1.50M\n\n\n\n\n\n\nYes\n\n\n\n.334\n\n\n\n.246\n\n\n\n.365\n\n\n\n.510\n\n\n\n0.08M\n\n\n\n\n\n\nTuckER\n\n\n\nNo\n\n\n\n.359\n\n\n\n.266\n\n\n\n.396\n\n\n\n.542\n\n\n\n3.02M\n\n\n\n\n\n\n\nYes\n\n\n\n.353\n\n\n\n.263\n\n\n\n.387\n\n\n\n.532\n\n\n\n1.08M\n\n\n\n\n\nWN18RR\n\n\n\nDistMult\n\n\n\nNo\n\n\n\n.452\n\n\n\n.419\n\n\n\n.461\n\n\n\n.516\n\n\n\n2.02M\n\n\n\n4.10M\n\n\n\n\n\n\nYes\n\n\n\n.437\n\n\n\n.264\n\n\n\n.392\n\n\n\n.535\n\n\n\n0.08M\n\n\n\n\n\n\nTuckER\n\n\n\nNo\n\n\n\n.482\n\n\n\n.441\n\n\n\n.500\n\n\n\n.560\n\n\n\n3.02M\n\n\n\n\n\n\n\nYes\n\n\n\n.471\n\n\n\n.438\n\n\n\n.484\n\n\n\n.532\n\n\n\n1.08M"
+ },
+ {
+ "section_id": "6.2",
+ "parent_section_id": "6",
+ "section_name": "The Effect of The Model Compression Method",
+ "text": "Our empirical results on FB15k-237 and WN18RR with and without using CP decomposition for model compression are demonstrated in Table 2 ###reference_###. The number of Embedding Free Parameters (#EFP) indicates the number of free parameters of the embedding matrices for entities and relations. The number of Nonembedding Free Parameters (#NFP) indicates the total number of all other free parameters of the model. We show more experimental results of the effectiveness of using CP decomposition for regularization in the Appendix A ###reference_###.\nUsing low-rank decomposition of the core tensor by CP decomposition has significantly decreased the free parameters of TGCN encoder on both datasets. Thus they can be ignored and only the parameters of the decoder remain.\nThe performance gap between using CP decomposition and not using it on FB15k-237 is minor, whereas it is more evident on WN18RR. We conjecture that the decrease in performance when employing CP decomposition might be attributed to excessive parameter sharing through relations. The diversity of relations in WN18RR is considerably lower than in FB15k-237. Therefore, when making a low-rank approximation of the core tensor, the transformations applied to neighbors become more similar, resulting in overlapping projection spaces. A higher number of relations alleviates this overlap, and the results on FB15k-237 indicate that the performance difference is not considerable.\nIt can be seen in Table 2 ###reference_### that in the case of using DisMult Decoder, #NFP is lower than 0.1M. Importantly, TGCN has improved DistMult performance on FB15k-237 and WN18RR. This considerable performance increase is attained by a negligible number of extra parameters, which proves the effectiveness of TGCN as a general encoder."
+ },
+ {
+ "section_id": "6.2.1",
+ "parent_section_id": "6.2",
+ "section_name": "6.2.1 The Effect of The Number of Bases",
+ "text": "To investigate the effect of the number of bases on TGCN using CP decomposition, we show model performance and the number of Encoder Nonembedding Free Parameters (#ENFP) as a function of the number of bases . Figure 1 ###reference_### shows the results of this experiment. The Hyper-parameters are only tuned for , and fixed for other values of .\nAs we expected, the number of bases is effective since the performance of TGCN obviously increases slightly using a high number of bases. On the other hand, its #ENFP increases linearly, leading to more complexity. Besides, we can see that #ENFP is comparably low even in high values of , and we can neglect it in . So we choose for a trade-off. This shows that TGCN is highly applicable for low memory and computation limitations.\nDoE\n\nFB15k-237\nWN18RR\n\n\n\n\n\nMRR\n\n\n\nHits@1\n\n\n\nHits@3\n\n\n\nHits@10\n\n\n\nMRR\n\n\n\nHits@1\n\n\n\nHits@3\n\n\n\nHits@10\n\n\n\n\n\n\nTGCN-Tucker\n\n\n\n32\n\n\n\n.342\n\n\n\n.254\n\n\n\n.373\n\n\n\n.516\n\n\n\n.455\n\n\n\n.419\n\n\n\n.471\n\n\n\n.521\n\n\n\n\n\n64\n\n\n\n.355\n\n\n\n.262\n\n\n\n.388\n\n\n\n.537\n\n\n\n.474\n\n\n\n.434\n\n\n\n.491\n\n\n\n.545\n\n\n\n\n\n100\n\n\n\n.359\n\n\n\n.266\n\n\n\n.396\n\n\n\n.542\n\n\n\n.482\n\n\n\n.441\n\n\n\n.500\n\n\n\n.560"
+ },
+ {
+ "section_id": "6.3",
+ "parent_section_id": "6",
+ "section_name": "The Effect of The Dimensionality of Embedding",
+ "text": "KGs commonly used in applications where there are numerous entities and relation types because of they represent abstract forms of information. Therefore, KGE models must poses the potential of scalability to large datasets. One essential feature to meet this scalability criteria is the dimensionality of embeddings. Therefore, in this experiment we validate the performance of TGCN when the dimensionality of embeddings are considerably low. Since the number of entity types is mostly the bottleneck of real-world applications, here, we change only the dimensionality of embeddings for entities while fixing it for relations as .\nTable 3 ###reference_### shows the competitive performance of TGCN with low-domensional embeddings showing the potential of our model to be scaled and utilized for huge datasets. Especially when the model can still outperform most of the baselines except for TuckER, compared with Table 2, on both datasets while these methods use high-dimensional embeddings."
+ },
+ {
+ "section_id": "6.4",
+ "parent_section_id": "6",
+ "section_name": "The Effect of The Size of Random Subgraphs",
+ "text": "As we expected and shown in Figure 2 ###reference_###, the size of random subgraphs considerably impacts the performance. This is due to our entity representation updating function (discussed in more detail in Section 5.1 ###reference_###). Hyperparameters are tuned only with on WN18RR, and fixed for other values of .\nClearly, by increasing , TGCN performance improves on both datasets. Nevertheless, there is only a slight performance increase in high values of ; thus, generalization might decrease after a peak in higher values of . Concretely, by increasing each entity has broader access to its neighbors, providing it with more information. However, subgraph sampling loses its regularization effect (see Section 5.1 ###reference_###) simultaneously, resulting in weaker performance."
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "We introduce TGCN, a general KG encoder, by incorporating Tucker decomposition in the aggregation function of R-GCN to efficiently integrate the information of relations and entities. Specifically, the projection matrices applied to the representations of neighbor entities depend on the relations. We compress our model using CP decomposition and discuss its considerable regularizing effect. Also, inspired by contrastive learning, we train our model using a cost function that tackle the scalability issue of the 1-N method for training on huge graphs. Our results show TGCN with embeddings of considerably lower dimensionality achieves superior performance to all the baselines on FB15k-237 and WN18RR."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A More Experiments",
+ "text": "To verify the claim that our model compression method can occasionally improve the performance of similar models as a regularizer, we use CP decomposition as an alternative for the original regularization methods in R-GCN (basis decomposition and block decomposition) [10 ###reference_b10###]. Table 4 ###reference_### shows the performance of R-GCN using block decomposition, basis decomposition, and CP decomposition. These results are attained by implementing R-GCN using the original hyperparameters of the paper.\nWe can see that CP decomposition performs successfully as a regularization method and has decreased the values of #NFP and #EFP. Interestingly, this parameter reduction not only does not harm the model performance but also improves it.\nDecomposition\n\n\n\nDoE\n\n\n\nMRR\n\n\n\n#NFP\n\n\n\n#EFP\n\n\n\n\n\n\nR-GCN\n\n\n\nBlock\n\n\n\n500\n\n\n\n.238\n\n\n\n2.32M\n\n\n\n7.39M\n\n\n\n\n\nBasis\n\n\n\n100\n\n\n\n.220\n\n\n\n2.05M\n\n\n\n1.48M\n\n\n\n\n\nCP\n\n\n\n100\n\n\n\n.239\n\n\n\n0.13M\n\n\n\n1.48M"
+ },
+ {
+ "section_id": "Appendix 2",
+ "parent_section_id": null,
+ "section_name": "Appendix B Datasets",
+ "text": "We used FB15k-237 and WN18RR in our experiments which are subsets of FB15k and WN [2 ###reference_b2###] and have the issue of information leakage of the training sets to test and validation sets. Statistics of the datasets are summarized in Table 5 ###reference_###."
+ },
+ {
+ "section_id": "Appendix 3",
+ "parent_section_id": null,
+ "section_name": "Appendix C Experimental Details",
+ "text": "Our model is implemented in PyTorch [47 ###reference_b47###] and trained on single GPU core NVIDIA-L40. We used Adam [48 ###reference_b48###] to optimize our model and used norm of the relations and entities embedding matrices for regularization with a factor () of . The dimensionality of embeddings are fixed to and for the main experiments on both datasets. We chose the hyper-parameters based on the performance of our model on the validation set according to MRR using random search. We chose learning rate out of , with a step decay of every iteration. The number of bases is fixed to on both datasets to keep #ENFP low while attaining favorable performance.\nNext, all hyper-parameters are fixed except for dropout rates. We tuned dropout rates of encoder input (), hidden states (), output (), and the decoder () in . We leave choosing higher , and for future studies. Table 6 ###reference_### contains all the settings in our experiments."
+ }
+ ],
+ "tables": {
+ "1": {
+ "table_html": "\nTable 1: KGC results on the benchmark datasets. Results of are taken from [5] and results of are taken from [1]. Other results are taken from original papers. DoE stands for Dimensionality of Embeddings. Overall, TGCN-Tucker has superior performance to all baselines on both datasets although it has the lowest dimensionality of embeddings. TGCN-Distmult improves the simple corresponding decoder DistMult significantly.\n
\n",
+ "capture": "Table 1: KGC results on the benchmark datasets. Results of are taken from [5] and results of are taken from [1]. Other results are taken from original papers. DoE stands for Dimensionality of Embeddings. Overall, TGCN-Tucker has superior performance to all baselines on both datasets although it has the lowest dimensionality of embeddings. TGCN-Distmult improves the simple corresponding decoder DistMult significantly."
+ },
+ "2": {
+ "table_html": "\nTable 2: \nEffect of CP decomposition as model compression method on TGCN performance and the number of free parameters of the model. This method considerably reduces the number of trainable parameters on both datasets with different decoders.\n\n
\n",
+ "capture": "Table 2: \nEffect of CP decomposition as model compression method on TGCN performance and the number of free parameters of the model. This method considerably reduces the number of trainable parameters on both datasets with different decoders.\n"
+ },
+ "3": {
+ "table_html": "\nTable 3: The effect of the dimensionality of embeddings. Results show that TGCN has competitive performance even with low-dimensional embeddings making it highly scalable to huge datasets because of the reduction in required memory and computation.\n
\n",
+ "capture": "Table 3: The effect of the dimensionality of embeddings. Results show that TGCN has competitive performance even with low-dimensional embeddings making it highly scalable to huge datasets because of the reduction in required memory and computation."
+ },
+ "4": {
+ "table_html": "\nTable 4: KGC results of R-GCN on FB15k-237 and WN18RR using basis decomposition, block decomposition, and CP decomposition for regularization.\n
\n",
+ "capture": "Table 4: KGC results of R-GCN on FB15k-237 and WN18RR using basis decomposition, block decomposition, and CP decomposition for regularization."
+ },
+ "5": {
+ "table_html": "
\nTable 5: Statistics of the datasets used for evaluation. The number of triplets in training, test, and validation sets are shown. Also, and are the total number of entities and relations respectively.\n
\n\n
\n
\n\nDatasets\n\n
\n
\n\n#Entities\n\n
\n
\n\n#Relations\n\n
\n
\n\nTraining\n\n
\n
\n\nValidation\n\n
\n
\n\nTest\n\n
\n
\n\n\n
\n
\n\nFB15k-237\n\n
\n
\n\n14541\n\n
\n
\n\n237\n\n
\n
\n\n272115\n\n
\n
\n\n17535\n\n
\n
\n\n20466\n\n
\n
\n
\n
\n\nWN18RR\n\n
\n
\n\n40943\n\n
\n
\n\n11\n\n
\n
\n\n86835\n\n
\n
\n\n3034\n\n
\n
\n\n3134\n\n
\n
\n\n
\n
",
+ "capture": "Table 5: Statistics of the datasets used for evaluation. The number of triplets in training, test, and validation sets are shown. Also, and are the total number of entities and relations respectively."
+ },
+ "6": {
+ "table_html": "
\nTable 6: Hyper-parameters for the results of the experiments.\n
\n\n
\n
\n\nDatasets\n\n
\n
\n\n\n\n
\n
\n\n\n\n
\n
\n\nDecoder\n\n
\n
\n\nCP\n\n
\n
\n\n\n\n
\n
\n\n\n\n
\n
\n\n\n\n
\n
\n\n\n\n
\n
\n\n\n\n
\n
\n\n\n
\n
\n\nFB15k-237\n\n
\n
\n\n100000\n\n
\n
\n\n0.005\n\n
\n
\n\nTucker\n\n
\n
\n\nNo\n\n
\n
\n\n0.0\n\n
\n
\n\n0.1\n\n
\n
\n\n0.0\n\n
\n
\n\n0.2\n\n
\n
\n\n0.3\n\n
\n
\n
\n
\n
\n
\n
\n
\n\nYes\n\n
\n
\n\n0.2\n\n
\n
\n\n0.1\n\n
\n
\n\n0.1\n\n
\n
\n\n0.2\n\n
\n
\n\n0.3\n\n
\n
\n
\n
\n
\n
\n
\n\nDistmult\n\n
\n
\n\nNo\n\n
\n
\n\n0.1\n\n
\n
\n\n0.2\n\n
\n
\n\n0.1\n\n
\n
\n\n0.2\n\n
\n
\n\nN/A\n\n
\n
\n
\n
\n
\n
\n
\n
\n\nYes\n\n
\n
\n\n0.1\n\n
\n
\n\n0.1\n\n
\n
\n\n0.1\n\n
\n
\n\n0.2\n\n
\n
\n
\n
\n
\n\nWN18RR\n\n
\n
\n\n50000\n\n
\n
\n\n0.001\n\n
\n
\n\nTucker\n\n
\n
\n\nNo\n\n
\n
\n\n0.0\n\n
\n
\n\n0.0\n\n
\n
\n\n0.0\n\n
\n
\n\n0.3\n\n
\n
\n\n0.3\n\n
\n
\n
\n
\n
\n
\n
\n
\n\nYes\n\n
\n
\n\n0.0\n\n
\n
\n\n0.2\n\n
\n
\n\n0.1\n\n
\n
\n\n0.2\n\n
\n
\n\n0.3\n\n
\n
\n
\n
\n
\n
\n
\n\nDistmult\n\n
\n
\n\nNo\n\n
\n
\n\n0.0\n\n
\n
\n\n0.1\n\n
\n
\n\n0.0\n\n
\n
\n\n0.2\n\n
\n
\n\nN/A\n\n
\n
\n
\n
\n
\n
\n
\n
\n\nYes\n\n
\n
\n\n0.2\n\n
\n
\n\n0.2\n\n
\n
\n\n0.1\n\n
\n
\n\n0.2\n\n
\n
\n
\n\n
\n
",
+ "capture": "Table 6: Hyper-parameters for the results of the experiments."
+ }
+ },
+ "image_paths": {},
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Rotate: Knowledge graph embedding by relational rotation in complex space.",
+ "author": "Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang.",
+ "venue": "In the International Conference on Learning Representations, 2019.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "Translating embeddings for modeling multi-relational data.",
+ "author": "Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko.",
+ "venue": "In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc., 2013.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Quaternion knowledge graph embeddings.",
+ "author": "SHUAI ZHANG, Yi Tay, Lina Yao, and Qi Liu.",
+ "venue": "In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Low-dimensional hyperbolic knowledge graph embeddings.",
+ "author": "Ines Chami, Adva Wolf, Da-Cheng Juan, Frederic Sala, Sujith Ravi, and Christopher R\u00e9.",
+ "venue": "In the Annual Meeting of the Association for Computational Linguistics, pages 6901\u20136914. Association for Computational Linguistics, 2020.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "Convolutional 2d knowledge graph embeddings.",
+ "author": "Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel.",
+ "venue": "In the AAAI Conference on Artificial Intelligence, volume 32, 2018.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "Interacte: Improving convolution-based knowledge graph embeddings by increasing feature interactions.",
+ "author": "Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, Nilesh Agrawal, and Partha Talukdar.",
+ "venue": "In the AAAI Conference on Artificial Intelligence, volume 34, 2020a.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "Hypernetwork knowledge graph embeddings.",
+ "author": "Ivana Bala\u017eevi\u0107, Carl Allen, and Timothy M. Hospedales.",
+ "venue": "In the International Conference on Artificial Neural Networks, page 553\u2013565. Springer-Verlag, 2019.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "Coke: Contextualized knowledge graph embedding.",
+ "author": "Quan Wang, Pingping Huang, Haifeng Wang, Songtai Dai, Wenbin Jiang, Jing Liu, Yajuan Lyu, Yong Zhu, and Hua Wu.",
+ "venue": "2019a.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "Self-attention presents low-dimensional knowledge graph embeddings for link prediction.",
+ "author": "Peyman Baghershahi, Reshad Hosseini, and Hadi Moradi.",
+ "venue": "Knowledge-Based Systems, 260:110124, 2023.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "Modeling relational data with graph convolutional networks.",
+ "author": "Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling.",
+ "venue": "In The Semantic Web, pages 593\u2013607. Springer International Publishing, 2018.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "Composition-based multi-relational graph convolutional networks.",
+ "author": "Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar.",
+ "venue": "In the International Conference on Learning Representations, 2020b.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "Mixed-curvature multi-relational graph neural network for knowledge graph completion.",
+ "author": "Shen Wang, Xiaokai Wei, Cicero Nogueira Nogueira dos Santos, Zhiguo Wang, Ramesh Nallapati, Andrew Arnold, Bing Xiang, Philip S. Yu, and Isabel F. Cruz.",
+ "venue": "In Proceedings of the Web Conference 2021, page 1761\u20131771. Association for Computing Machinery, 2021a.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Explainable GNN-based models over knowledge graphs.",
+ "author": "David Jaime Tena Cucala, Bernardo Cuenca Grau, Egor V. Kostylev, and Boris Motik.",
+ "venue": "In International Conference on Learning Representations, 2022.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "Canonical tensor decomposition for knowledge base completion.",
+ "author": "Timothee Lacroix, Nicolas Usunier, and Guillaume Obozinski.",
+ "venue": "In the International Conference on Machine Learning, pages 2863\u20132872, 2018.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "HittER: Hierarchical transformers for knowledge graph embeddings.",
+ "author": "Sanxing Chen, Xiaodong Liu, Jianfeng Gao, Jian Jiao, Ruofei Zhang, and Yangfeng Ji.",
+ "venue": "In the Conference on Empirical Methods in Natural Language Processing, pages 10395\u201310407. Association for Computational Linguistics, 2021.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "Indigo: Gnn-based inductive knowledge graph completion using pair-wise encoding.",
+ "author": "Shuwen Liu, Bernardo Grau, Ian Horrocks, and Egor Kostylev.",
+ "venue": "In Advances in Neural Information Processing Systems, volume 34, pages 2034\u20132045. Curran Associates, Inc., 2021a.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "End-to-end structure-aware convolutional networks for knowledge base completion.",
+ "author": "Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou.",
+ "venue": "In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence. AAAI Press, 2019.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "Contextual parameter generation for knowledge graph link prediction.",
+ "author": "George Stoica, Otilia Stretcu, Emmanouil Antonios Platanios, Tom Mitchell, and Barnab\u00e1s P\u00f3czos.",
+ "venue": "Proceedings of the AAAI Conference on Artificial Intelligence, 34:3000\u20133008, 2020.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "TuckER: Tensor factorization for knowledge graph completion.",
+ "author": "Ivana Balazevic, Carl Allen, and Timothy Hospedales.",
+ "venue": "In the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing, pages 5185\u20135194. Association for Computational Linguistics, 2019a.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "Complex embeddings for simple link prediction.",
+ "author": "Th\u00e9o Trouillon, Johannes Welbl, Sebastian Riedel, \u00c9ric Gaussier, and Guillaume Bouchard.",
+ "venue": "In the International Conference on Machine Learning, page 2071\u20132080, 2016.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "Knowledge graph embedding by translating on hyperplanes.",
+ "author": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen.",
+ "venue": "In the AAAI Conference on Artificial Intelligence, volume 28, 2014.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "A three-way model for collective learning on multi-relational data.",
+ "author": "Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel.",
+ "venue": "In the International Conference on Machine Learning, page 809\u2013816, 2011.",
+ "url": null
+ }
+ },
+ {
+ "23": {
+ "title": "Embedding entities and relations for learning and inference in knowledge bases.",
+ "author": "Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng.",
+ "venue": "In the International Conference on Learning Representations, 2015.",
+ "url": null
+ }
+ },
+ {
+ "24": {
+ "title": "Simple embedding for link prediction in knowledge graphs.",
+ "author": "Seyed Mehran Kazemi and David Poole.",
+ "venue": "In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.",
+ "url": null
+ }
+ },
+ {
+ "25": {
+ "title": "Multi-relational poincar\u00e9 graph embeddings.",
+ "author": "Ivana Balazevic, Carl Allen, and Timothy Hospedales.",
+ "venue": "In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019b.",
+ "url": null
+ }
+ },
+ {
+ "26": {
+ "title": "Reasoning with neural tensor networks for knowledge base completion.",
+ "author": "Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng.",
+ "venue": "In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc., 2013.",
+ "url": null
+ }
+ },
+ {
+ "27": {
+ "title": "Eta prediction with graph neural networks in google maps.",
+ "author": "Austin Derrow-Pinion, Jennifer She, David Wong, Oliver Lange, Todd Hester, Luis Perez, Marc Nunkesser, Seongjae Lee, Xueying Guo, Brett Wiltshire, Peter W. Battaglia, Vishal Gupta, Ang Li, Zhongwen Xu, Alvaro Sanchez-Gonzalez, Yujia Li, and Petar Velickovic.",
+ "venue": "In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, page 3767\u20133776. Association for Computing Machinery, 2021.",
+ "url": null
+ }
+ },
+ {
+ "28": {
+ "title": "Dynamic graph cnn for learning on point clouds.",
+ "author": "Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, and Justin M. Solomon.",
+ "venue": "ACM Trans. Graph., 38(5), 2019b.",
+ "url": null
+ }
+ },
+ {
+ "29": {
+ "title": "Modeling polypharmacy side effects with graph convolutional networks.",
+ "author": "Marinka Zitnik, Monica Agrawal, and Jure Leskovec.",
+ "venue": "Bioinformatics, 34(13):i457\u2013i466, 2018.",
+ "url": null
+ }
+ },
+ {
+ "30": {
+ "title": "Semi-supervised classification with graph convolutional networks.",
+ "author": "Thomas N. Kipf and Max Welling.",
+ "venue": "In International Conference on Learning Representations, 2017.",
+ "url": null
+ }
+ },
+ {
+ "31": {
+ "title": "Simplifying graph convolutional networks as matrix factorization.",
+ "author": "Qiang Liu, Haoli Zhang, and Zhaocheng Liu.",
+ "venue": "In Web and Big Data: 5th International Joint Conference, APWeb-WAIM, page 35\u201343. Springer-Verlag, 2021b.",
+ "url": null
+ }
+ },
+ {
+ "32": {
+ "title": "Graph attention networks.",
+ "author": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio.",
+ "venue": "In International Conference on Learning Representations, 2018.",
+ "url": null
+ }
+ },
+ {
+ "33": {
+ "title": "Inductive representation learning on large graphs.",
+ "author": "Will Hamilton, Zhitao Ying, and Jure Leskovec.",
+ "venue": "In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.",
+ "url": null
+ }
+ },
+ {
+ "34": {
+ "title": "Position-aware graph neural networks.",
+ "author": "Jiaxuan You, Rex Ying, and Jure Leskovec.",
+ "venue": "In Proceedings of the 36th International Conference on Machine Learning, volume 97, pages 7134\u20137143, 2019.",
+ "url": null
+ }
+ },
+ {
+ "35": {
+ "title": "Modeling heterogeneous hierarchies with relation-specific hyperbolic cones.",
+ "author": "Yushi Bai, Zhitao Ying, Hongyu Ren, and Jure Leskovec.",
+ "venue": "In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 12316\u201312327. Curran Associates, Inc., 2021.",
+ "url": null
+ }
+ },
+ {
+ "36": {
+ "title": "A2N: Attending to neighbors for knowledge graph inference.",
+ "author": "Trapit Bansal, Da-Cheng Juan, Sujith Ravi, and Andrew McCallum.",
+ "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4387\u20134392. Association for Computational Linguistics, 2019.",
+ "url": null
+ }
+ },
+ {
+ "37": {
+ "title": "Multi-hop attention graph neural networks.",
+ "author": "Guangtao Wang, Rex Ying, Jing Huang, and Jure Leskovec.",
+ "venue": "In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 3089\u20133096. International Joint Conferences on Artificial Intelligence Organization, 2021b.",
+ "url": null
+ }
+ },
+ {
+ "38": {
+ "title": "Knowledge transfer for out-of-knowledge-base entities: A graph neural network approach.",
+ "author": "Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto.",
+ "venue": "In Proceedings of the 26th International Joint Conference on Artificial Intelligence, page 1802\u20131808. AAAI Press, 2017.",
+ "url": null
+ }
+ },
+ {
+ "39": {
+ "title": "Inductive relation prediction by subgraph reasoning.",
+ "author": "Komal K. Teru, Etienne G. Denis, and William L. Hamilton.",
+ "venue": "In Proceedings of the 37th International Conference on Machine Learning. JMLR.org, 2020.",
+ "url": null
+ }
+ },
+ {
+ "40": {
+ "title": "Some mathematical notes on three-mode factor analysis.",
+ "author": "Ledyard R Tucker.",
+ "venue": "In Psychometrika, volume 31, pages 279\u2013311, 1966.",
+ "url": null
+ }
+ },
+ {
+ "41": {
+ "title": "Towards a standardized notation and terminology in multiway analysis.",
+ "author": "Henk A. L. Kiers.",
+ "venue": "Journal of Chemometrics, 14(3):105\u2013122, 2000.",
+ "url": null
+ }
+ },
+ {
+ "42": {
+ "title": "The expression of a tensor or a polyadic as a sum of products.",
+ "author": "Frank L. Hitchcock.",
+ "venue": "In Journal of Mathematics and Physics, volume 6, pages 164\u2013189, 1927.",
+ "url": null
+ }
+ },
+ {
+ "43": {
+ "title": "Representation learning with contrastive predictive coding, 2019.",
+ "author": "Aaron van den Oord, Yazhe Li, and Oriol Vinyals.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "44": {
+ "title": "A simple framework for contrastive learning of visual representations.",
+ "author": "Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton.",
+ "venue": "In Proceedings of the 37th International Conference on Machine Learning, ICML\u201920. JMLR.org, 2020.",
+ "url": null
+ }
+ },
+ {
+ "45": {
+ "title": "Representing text for joint embedding of text and knowledge bases.",
+ "author": "Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon.",
+ "venue": "In the Conference on Empirical Methods in Natural Language Processing, pages 1499\u20131509. Association for Computational Linguistics, 2015.",
+ "url": null
+ }
+ },
+ {
+ "46": {
+ "title": "A re-evaluation of knowledge graph completion methods.",
+ "author": "Zhiqing Sun, Shikhar Vashishth, Soumya Sanyal, Partha Talukdar, and Yiming Yang.",
+ "venue": "In the Annual Meeting of the Association for Computational Linguistics, pages 5516\u20135522. Association for Computational Linguistics, 2020.",
+ "url": null
+ }
+ },
+ {
+ "47": {
+ "title": "Pytorch: An imperative style, high-performance deep learning library.",
+ "author": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala.",
+ "venue": "32, 2019.",
+ "url": null
+ }
+ },
+ {
+ "48": {
+ "title": "Adam: A method for stochastic optimization.",
+ "author": "Diederik P. Kingma and Jimmy Ba.",
+ "venue": "In the International Conference on Learning Representations, 2015.",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2212.05581v4"
+}
\ No newline at end of file