{
"title": "Graph Encoder Ensemble for Simultaneous Vertex Embedding and Community Detection",
"abstract": "In this paper, we introduce a novel and computationally efficient method for vertex embedding, community detection, and community size determination. Our approach leverages a normalized one-hot graph encoder and a rank-based cluster size measure. Through extensive simulations, we demonstrate the excellent numerical performance of our proposed graph encoder ensemble algorithm.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "1. Introduction",
"text": "Graph data represents pairwise relationships between vertices through a collection of vertices and edges. Typically, a graph (or network) with n vertices is represented by an adjacency matrix A of size n x n, where A(i, j) denotes the edge weight between the ith and jth vertices. Alternatively, the graph can be stored in an edge list of size s x 3, with the first two columns indicating the vertex indices of each edge and the last column representing the edge weight.\nCommunity detection, also known as vertex clustering or graph partitioning, is a fundamental problem in graph analysis (Girvan and Newman, 2002; Newman, 2004; Fortunato, 2010; Karrer and Newman, 2011). The primary objective is to identify natural groups of vertices such that intra-group connections are stronger than inter-group connections. Over the years, various approaches have been proposed, including modularity-based methods (Blondel et al., 2008; Traag et al., 2019), spectral-based methods (Rohe et al., 2011; Sussman et al., 2012), and likelihood-based techniques (Gao et al., 2018; Abbe, 2018), among others.\nSpectral-based and likelihood-based methods are extensively studied in the statistics community, but they tend to be computationally slow for large graphs. Modularity-based methods, on the other hand, are faster and widely used in practice, but they lack theoretical investigation and provide only community labels without a vertex embedding. Moreover, determining an appropriate community size poses a challenge for any method and is often addressed in an ad-hoc manner or assumed known. Therefore, a desirable approach is one that achieves community detection, vertex representation, and community size determination under a unified framework.\nIn this paper, we propose a graph encoder ensemble algorithm that simultaneously fulfills all these objectives. Our algorithm leverages a normalized one-hot graph encoder (Shen et al., 2023c), ensemble learning (Maclin and Opitz, 1999; Breiman, 2001), k-means clustering (Lloyd, 1982; Forgy, 1965), and a novel rank-based cluster size measure called the minimal rank index. The proposed algorithm exhibits linear running time and demonstrates excellent numerical performance. The code for the algorithm is available on GitHub: https://github.com/cshen6/GraphEmd."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "2. Methods",
"text": "We begin by introducing the one-hot graph encoder embedding from (Shen et al., 2023c), known for its computational efficiency and theoretical guarantees under random graph models. This embedding forms the foundation of our proposed ensemble method, outlined in Algorithm 1. The ensemble algorithm incorporates crucial enhancements, including normalization, the minimal rank index, and ensemble embedding, which are elaborated in the subsequent subsections."
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "2.1. Prerequisite",
"text": "Given the graph adjacency matrix A and a label vector Y with values in {1, ..., K}, we define n_k as the number of observations per class:\nn_k = |{ i : Y(i) = k }|\nfor k = 1, ..., K. We construct the one-hot encoding matrix W of size n x K on Y, then normalize it by the number of observations per class. Specifically, for each vertex i, we set\nW(i, k) = 1 / n_k\nif and only if Y(i) = k, and W(i, k) = 0 otherwise. The graph encoder embedding is then obtained by performing a simple matrix multiplication:\nZ = A W.\nEach row Z(i, :) represents a K-dimensional Euclidean representation of vertex i. The computational advantage of the graph encoder embedding lies in the matrix multiplication, which can be efficiently implemented by iterating over the edge list only once, without the need for the adjacency matrix (Shen et al., 2023c). In Algorithm 1, we denote the above steps as a single embedding operation on the pair (A, Y)."
},
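The embedding step above can be sketched in a few lines. The following is a minimal illustration, not the authors' released code; it assumes an edge list with rows (i, j, weight), zero-indexed integer labels, and an undirected graph so each edge contributes to both endpoints:

```python
import numpy as np

def graph_encoder_embed(edge_list, y, K):
    """One-pass sketch of the graph encoder embedding Z = A W.

    edge_list: array of shape (s, 3) with rows (i, j, weight).
    y: integer label vector of length n, values in {0, ..., K-1}.
    Returns Z of shape (n, K): Z[i, k] aggregates the edge weights
    from vertex i into class k, scaled by 1 / n_k.
    """
    n = len(y)
    n_k = np.bincount(y, minlength=K).astype(float)  # observations per class
    Z = np.zeros((n, K))
    for i, j, w in edge_list:
        i, j = int(i), int(j)
        # undirected: the edge contributes to the embedding of both endpoints
        Z[i, y[j]] += w / n_k[y[j]]
        Z[j, y[i]] += w / n_k[y[i]]
    return Z

# toy graph: communities {0, 1} and {2, 3}, plus one cross-community edge
edges = np.array([[0, 1, 1.0], [2, 3, 1.0], [1, 2, 1.0]])
y = np.array([0, 0, 1, 1])
Z = graph_encoder_embed(edges, y, K=2)
```

Note that the adjacency matrix is never materialized: the loop touches each edge exactly once, which is what makes the method linear in the number of edges.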
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "2.2. Main Algorithm",
"text": "The proposed ensemble method is described in detail in Algorithm 1. It can be applied to binary or weighted graphs, as well as directed or undirected graphs. Throughout this paper, we fix the number of random replicates and the maximum number of iterations, while the clustering range is determined by the specific experiment.\nIn the pseudo-code, the normalization step rescales each vertex representation to unit norm (see Section 2.3 for more details). Additionally, given an embedding Z and a label vector Y, the minimal rank index MRI(Z, Y) measures the quality of the clustering, with a lower value indicating better clustering (details in Section 2.4). The k-means clustering step groups the embedded vertices, and the adjusted Rand index (ARI) measures the similarity between two label vectors of the same size. The ARI is a popular matching metric that ranges from negative values up to 1, with a larger positive value indicating better match quality and a value of 1 representing a perfect match (Rand, 1971)."
},
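Since the ARI serves as the evaluation metric throughout the paper, a small self-contained sketch of its pair-counting definition may help; the function name and interface here are illustrative, not from the paper's codebase:

```python
import numpy as np
from math import comb

def adjusted_rand_index(y_true, y_pred):
    """Adjusted Rand index via the pair-counting contingency table."""
    _, a = np.unique(y_true, return_inverse=True)
    _, b = np.unique(y_pred, return_inverse=True)
    C = np.zeros((a.max() + 1, b.max() + 1), dtype=int)
    for i, j in zip(a, b):
        C[i, j] += 1  # contingency table of the two partitions
    sum_cells = sum(comb(int(x), 2) for x in C.ravel())
    sum_rows = sum(comb(int(x), 2) for x in C.sum(axis=1))
    sum_cols = sum(comb(int(x), 2) for x in C.sum(axis=0))
    expected = sum_rows * sum_cols / comb(len(a), 2)  # chance agreement
    max_index = (sum_rows + sum_cols) / 2
    if max_index == expected:  # degenerate partitions
        return 1.0
    return (sum_cells - expected) / (max_index - expected)
```

A permutation of the labels still yields an ARI of 1, which is why a matching metric like ARI, rather than raw label accuracy, is used to compare estimated and ground-truth communities.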
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "2.3. Why Normalization",
"text": "The normalization step in Algorithm 1 scales each vertex embedding to unit norm. Specifically, for each vertex i, we set Z(i, :) = Z(i, :) / ||Z(i, :)|| whenever ||Z(i, :)|| > 0. The normalization step plays a crucial role in achieving improved clustering results, as demonstrated in Figure 1 using a sparse random graph model with two communities. The normalized embedding lies on the unit sphere, effectively capturing the connectivity information while mitigating the influence of vertex degrees. In contrast, the un-normalized embedding is significantly affected by the original vertex degrees, causing vertices from the same community to be widely dispersed. This distinction resembles the two-truth phenomenon observed between the graph adjacency matrix and the graph Laplacian, where the Laplacian spectral embedding (LSE) can be viewed as a degree-normalized version of the adjacency spectral embedding (ASE), and LSE typically performs better on sparse graphs. Further numerical evaluations of the normalization effect can be found in Section 3.2 and Table 1."
},
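A minimal sketch of this normalization, with an illustrative example of why it helps: two vertices of the same community but very different degrees collapse onto the same point of the unit sphere. The function name is an assumption for illustration:

```python
import numpy as np

def normalize_rows(Z):
    """Scale each nonzero vertex embedding to unit Euclidean norm."""
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # leave zero rows (isolated vertices) untouched
    return Z / norms

# same community, different degrees: rows are scalar multiples of each other,
# so after normalization they land on the same point of the unit circle
Z = np.array([[3.0, 4.0], [6.0, 8.0], [0.0, 0.0]])
W = normalize_rows(Z)
```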
{
"section_id": "2.4",
"parent_section_id": "2",
"section_name": "2.4. The Minimal Rank Index",
"text": "We introduce a new rank-based measure called the minimal rank index (MRI) to assess the quality of clustering. This measure plays a crucial role in Algorithm 1, as it enables the comparison of multiple embeddings generated from different initializations and community sizes.\nGiven the cluster index Y(i) of each vertex i, the Euclidean distance function d(., .), and the mean of the kth cluster denoted by mu_k (the average of all rows Z(i, :) with Y(i) = k), the minimal rank index is computed as\nMRI = (1/n) * sum_{i=1}^{n} I( d(Z(i, :), mu_{Y(i)}) > min_{k} d(Z(i, :), mu_k) ).\nThe MRI measures how often a vertex embedding is not closest to its corresponding cluster mean. A smaller MRI value indicates better clustering quality, with MRI equal to 0 indicating that every vertex is closest to its own cluster mean. In the context of k-means clustering, MRI is non-zero when the k-means algorithm fails to converge.\nIn comparison to common cluster size measures such as the Silhouette Score, the Davies-Bouldin index, the Variance Ratio Criterion, and the Gap criterion (Rousseeuw, 1987; Davies and Bouldin, 1989), MRI is rank-based rather than based on actual distances. These other measures compute ratios of within-cluster distances to between-cluster distances. If any of them were used in Algorithm 1 instead of MRI, the choice of cluster size would be biased towards the smallest possible value. This is because the embedding dimension of the graph encoder embedding in Algorithm 1 equals the community size K, so within-cluster distances shrink as K decreases, biasing any actual-distance measure towards the smallest K."
},
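The definition above translates directly into a few lines of numpy. This is a sketch of the stated formula, assuming zero-indexed labels; the two toy embeddings below are invented for illustration (in the second, vertex 3 sits on top of cluster 0's mean, so exactly one of four vertices is misranked):

```python
import numpy as np

def minimal_rank_index(Z, y, K):
    """Fraction of vertices whose embedding is not closest to its own cluster mean."""
    means = np.vstack([Z[y == k].mean(axis=0) for k in range(K)])
    # n x K matrix of distances from every vertex to every cluster mean
    d = np.linalg.norm(Z[:, None, :] - means[None, :, :], axis=2)
    return float(np.mean(np.argmin(d, axis=1) != y))

y = np.array([0, 0, 1, 1])
Z_good = np.array([[0.0, 0.0], [0.0, 0.2], [2.0, 2.0], [1.9, 2.1]])
Z_bad = np.array([[0.0, 0.0], [0.0, 0.2], [2.0, 2.0], [0.0, 0.1]])  # vertex 3 misplaced
```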
{
"section_id": "2.5",
"parent_section_id": "2",
"section_name": "2.5. Ensemble Embedding and Cluster Size Determination",
"text": "Ensemble learning is utilized in Algorithm 1 to improve learning performance and reduce variance by employing multiple models. The approach can be summarized as follows: for each value of K in the cluster range, we generate a set of vertex embeddings and community labels using random label initializations. The model with the smallest MRI is selected as the best model; in cases where multiple models attain the same smallest MRI, their average embedding is used.\nAdditionally, among all possible choices of the cluster size K, the embedding with the smallest MRI is selected as the final output. If embeddings at several values of K share the same smallest MRI, the one with the largest K is chosen."
},
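The size-selection rule reduces to a two-line decision once each candidate K has an MRI value. A sketch with hypothetical MRI values (the numbers are illustrative only, not from the paper's experiments):

```python
def select_cluster_size(mri_by_k):
    """Pick the community size with the smallest MRI, breaking ties by the largest K."""
    best = min(mri_by_k.values())
    return max(k for k, m in mri_by_k.items() if m == best)

# hypothetical MRI values per candidate size: K = 3 and K = 4 tie at 0,
# so the larger size K = 4 is selected
chosen = select_cluster_size({2: 0.10, 3: 0.00, 4: 0.00, 5: 0.20})
```

Breaking ties towards the larger K guards against under-clustering: a coarser partition can also rank perfectly, but the finer one carries more structure.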
{
"section_id": "2.6",
"parent_section_id": "2",
"section_name": "2.6. Computational Complexity Analysis",
"text": "Algorithm 1 comprises several steps, including the one-hot graph encoder embedding, k-means clustering, the MRI computation, and the ensembles. Let n be the number of vertices and s the number of edges. At any fixed community size, the one-hot graph encoder embedding, the k-means step, and the MRI computation each run in time linear in n and s. Therefore, the overall time complexity of Algorithm 1 is linear with respect to the number of vertices and edges, and the storage requirement is linear as well. In practical terms, the graph encoder ensemble algorithm exhibits remarkable efficiency and scalability: testing on simulated graphs with default parameters, it takes less than 3 minutes to process 1 million edges and less than 20 minutes for 10 million edges."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "3. Results",
"text": "In this section, we conduct extensive numerical experiments to demonstrate the advantages of the graph encoder ensemble, as well as the individual benefits of normalization, ensemble, and MRI. We compare these approaches against benchmarks including the algorithm without normalization, without ensemble, with MRI replaced, and using adjacency/Laplacian spectral embedding. The performance is evaluated using the adjusted Rand index (ARI), which measures the degree of agreement between the estimated communities and the ground-truth labels."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "3.1. Simulation Set-up",
"text": "The stochastic block model (SBM) is a widely used random graph model for studying community structure (Holland et al., 1983; Snijders and Nowicki, 1997). Each vertex i is associated with a class label Y(i) in {1, ..., K}. The class labels may be fixed a-priori, or generated by a categorical distribution with a prior probability vector. A block probability matrix B then specifies the edge probability between a vertex from class k and a vertex from class l: for any i < j,\nA(i, j) ~ Bernoulli( B(Y(i), Y(j)) ).\nThe degree-corrected stochastic block model (DC-SBM) (Zhao et al., 2012) is a generalization of the SBM that better models the sparsity of real graphs. With everything else the same as in the SBM, each vertex i has an additional degree parameter theta(i), and the adjacency matrix is generated by\nA(i, j) ~ Bernoulli( theta(i) * theta(j) * B(Y(i), Y(j)) ).\nIn our simulations, we consider three DC-SBM models with increasing community sizes; in all models, the degree parameters are generated randomly.\nSimulation 1: two communities with equally likely class labels and a specified block probability matrix.\nSimulation 2: communities generated from a specified prior probability and block probability matrix.\nSimulation 3: communities with equally likely prior probabilities, where the block probability matrix has a common within-community probability and a common between-community probability across all pairs of distinct communities."
},
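The DC-SBM sampling step above is straightforward to implement for an undirected, hollow adjacency matrix. The parameters below are illustrative, not the paper's actual simulation settings:

```python
import numpy as np

def sample_dcsbm(y, B, theta, seed=None):
    """Sample a symmetric DC-SBM adjacency matrix with no self-loops:
    A(i, j) ~ Bernoulli(theta_i * theta_j * B[y_i, y_j]) for i < j."""
    rng = np.random.default_rng(seed)
    n = len(y)
    P = np.outer(theta, theta) * B[np.ix_(y, y)]  # edge probability matrix
    U = rng.random((n, n))
    A = (np.triu(U, 1) < np.triu(P, 1)).astype(int)  # sample upper triangle only
    return A + A.T  # symmetrize; diagonal stays zero

# illustrative two-community setting with assortative block probabilities
y = np.repeat([0, 1], 10)
B = np.array([[0.9, 0.1], [0.1, 0.9]])
theta = np.full(20, 0.9)
A = sample_dcsbm(y, B, theta, seed=0)
```

Sampling only the upper triangle and symmetrizing guarantees A(i, j) = A(j, i) with a single Bernoulli draw per vertex pair.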
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "3.2. Normalization Comparison",
"text": "Table 1 provides clear evidence of the superior clustering performance achieved by the normalized algorithm compared to the un-normalized algorithm. To isolate the impact of normalization, we assume the cluster size is known. The observed improvement aligns with the phenomenon observed between adjacency spectral embedding (ASE) and Laplacian spectral embedding (LSE), where LSE, being a normalized version of ASE, consistently outperforms ASE."
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "3.3. Ensemble Comparison",
"text": "In this simulation, we assume a known cluster size and conduct Monte Carlo replicates to compare the performance of the ensemble algorithm with the no-ensemble version. The results in Table 2 clearly demonstrate the superiority of the ensemble algorithm: it achieves a higher mean ARI and significantly reduces the variance compared to the no-ensemble version. Based on our empirical observations, the default number of replicates yields satisfactory results across our experiments. Additionally, if the graph is sufficiently large and the community structure is well-separated, a smaller number of replicates, or even a single replicate, is sufficient, as is evident in simulation 1 of Table 2."
},
{
"section_id": "3.4",
"parent_section_id": "3",
"section_name": "3.4. Cluster Size Estimation",
"text": "In this analysis, we explore the performance of the algorithm in estimating the community size. Instead of using the ground-truth size, we consider a range of potential sizes, and the results are presented in Figure 2. These findings provide insight into how accurately the algorithm estimates the community size and highlight the importance of the MRI measure in achieving accurate size determination."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "4. Conclusion",
"text": "This paper introduces the graph encoder ensemble, which achieves graph embedding, community detection, and community size determination in a unified framework. Its main advantages include ease of implementation, computational efficiency, and excellent performance in community detection and community size selection. Potential future directions include mathematical proofs of asymptotic clustering optimality, theoretical properties of the MRI, and extensions of the method to dynamic and multi-modal graphs (Shen et al., 2023b; Shen et al., 2023a)."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.12\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.12.13.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" colspan=\"5\" id=\"S3.T1.12.13.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">ARI</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.12.14.2\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_rr ltx_border_t\" id=\"S3.T1.12.14.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.12.14.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">GEE</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.12.14.2.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">GEE no norm</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.12.14.2.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">ASE</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.12.14.2.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">LSE</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_rr ltx_border_t\" id=\"S3.T1.4.4.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Simulation 1</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_rr ltx_border_t\" id=\"S3.T1.8.8.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Simulation 2</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.5.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.6.6.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.7.7.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.8.8.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_rr ltx_border_t\" id=\"S3.T1.12.12.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Simulation 3</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.9.9.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.10.10.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.11.11.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.12.12.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1. </span>This table demonstrates the advantage of normalization in the graph encoder ensemble. 
The \u201dGEE\u201d column refers to the graph encoder ensemble using Algorithm\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2301.11290v3#alg1\" title=\"Algorithm 1 \u2023 2.2. Main Algorithm \u2023 2. Methods \u2023 Graph Encoder Ensemble for Simultaneous Vertex Embedding and Community Detection\"><span class=\"ltx_text ltx_ref_tag\">1</span></a>, while \u201dGEE no norm\u201d indicates that normalization is not applied. The reported results are averages obtained from Monte Carlo replicates.</figcaption>\n</figure>",
"capture": "Table 1. This table demonstrates the advantage of normalization in the graph encoder ensemble. The \u201cGEE\u201d column refers to the graph encoder ensemble using Algorithm\u00a01, while \u201cGEE no norm\u201d indicates that normalization is not applied. The reported results are averages obtained from Monte Carlo replicates."
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.7\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T2.7.8.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" colspan=\"3\" id=\"S3.T2.7.8.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Average ARI + std</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_rr ltx_border_t\" id=\"S3.T2.1.1.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">GEE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">GEE ()</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_rr ltx_border_t\" id=\"S3.T2.3.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Simulation 1</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.2.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.3.3.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_rr ltx_border_t\" id=\"S3.T2.5.5.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Simulation 2</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.4.4.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.5.5.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.7.7\">\n<th class=\"ltx_td 
ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_rr ltx_border_t\" id=\"S3.T2.7.7.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Simulation 3</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T2.6.6.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T2.7.7.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2. </span>This table assesses the advantage of the ensemble approach in the graph encoder ensemble. The reported results include the mean and standard deviation of the Adjusted Rand Index (ARI) obtained from Monte Carlo replicates.</figcaption>\n</figure>",
"capture": "Table 2. This table assesses the advantage of the ensemble approach in the graph encoder ensemble. The reported results include the mean and standard deviation of the Adjusted Rand Index (ARI) obtained from Monte Carlo replicates."
}
},
"image_paths": {
"1": {
"figure_path": "2301.11290v3_figure_1.png",
"caption": "Figure 1. This figure visually demonstrates the effect of normalization. The left panel displays the adjacency heatmap of a simulated sparse graph using simulation 1 in Section 3.1. The center panel shows the resulting embedding without the normalization step, while the right panel displays the resulting embedding with normalization. The blue and red dots represent the true community labels of each vertex.",
"url": "http://arxiv.org/html/2301.11290v3/x1.png"
},
"2": {
"figure_path": "2301.11290v3_figure_2.png",
"caption": "Figure 2. This figure presents the results of cluster size estimation using the graph encoder ensemble. The estimation accuracy and the performance of different size measures are evaluated for various simulations and graph sizes. For each simulation and each graph size, we independently generate 100100100100 graphs, and run the ensemble algorithm to estimate the community size. The left panel of the figure illustrates the estimation accuracy as the graph size increases. The estimation accuracy represents the proportion of cases where the algorithm correctly chooses the community size. As the graph size increases, the estimation accuracy gradually improves, reaching a perfect estimation accuracy of 1111 for all simulations. The center panel focuses on simulation 3 at n=5000\ud835\udc5b5000n=5000italic_n = 5000. The MRI calculates K^=5^\ud835\udc3e5\\hat{K}=5over^ start_ARG italic_K end_ARG = 5 as the estimated community size, which matches the ground-truth size. In the right panel, the average Silhouette Score is computed as an alternative size measure, which is biased towards smaller community sizes and chooses K^S\u2062S=2subscript^\ud835\udc3e\ud835\udc46\ud835\udc462\\hat{K}_{SS}=2over^ start_ARG italic_K end_ARG start_POSTSUBSCRIPT italic_S italic_S end_POSTSUBSCRIPT = 2, resulting in a different estimation compared to the ground-truth size.",
"url": "http://arxiv.org/html/2301.11290v3/x2.png"
}
},
"validation": true,
"references": [
{
"1": {
"title": "Community Detection and Stochastic Block Models:\nRecent Developments.",
"author": "Emmanuel Abbe.\n2018.",
"venue": "Journal of Machine Learning Research\n18, 177 (2018),\n1\u201386.",
"url": null
}
},
{
"2": {
"title": "Fast unfolding of communities in large networks.",
"author": "V. D. Blondel, J. L.\nGuillaume, R. Lambiotte, and E.\nLefebvre. 2008.",
"venue": "Journal of Statistical Mechanics: Theory and\nExperiment 10008 (2008),\n6.",
"url": null
}
},
{
"3": {
"title": "Random Forests.",
"author": "L. Breiman.\n2001.",
"venue": "Machine Learning 4,\n1 (October 2001),\n5\u201332.",
"url": null
}
},
{
"4": {
"title": "A Cluster Separation Measure.",
"author": "David L. Davies and\nDonald W. Bouldin. 1989.",
"venue": "IEEE Transactions on Pattern Analysis and\nMachine Intelligence 1, 2\n(1989), 224\u2013227.",
"url": null
}
},
{
"5": {
"title": "Cluster analysis of multivariate data: efficiency\nversus interpretability of classifications.",
"author": "Edward W. Forgy.\n1965.",
"venue": "Biometrics 21,\n3 (1965), 768\u2013769.",
"url": null
}
},
{
"6": {
"title": "Community detection in graphs.",
"author": "Santo Fortunato.\n2010.",
"venue": "Physics Reports 486,\n3\u20135 (2010), 75\u2013174.",
"url": null
}
},
{
"7": {
"title": "Community detection in degree-corrected block\nmodels.",
"author": "Chao Gao, Zongming Ma,\nAnderson Y. Zhang, and Harrison H.\nZhou. 2018.",
"venue": "Annals of Statistics 46,\n5 (2018), 2153\u20132185.",
"url": null
}
},
{
"8": {
"title": "Community Structure in Social and Biological\nNetworks.",
"author": "M. Girvan and M. E. J.\nNewman. 2002.",
"venue": "Proceedings of National Academy of Science\n99, 12 (2002),\n7821\u20137826.",
"url": null
}
},
{
"9": {
"title": "Stochastic Blockmodels: First Steps.",
"author": "P. Holland, K. Laskey,\nand S. Leinhardt. 1983.",
"venue": "Social Networks 5,\n2 (1983), 109\u2013137.",
"url": null
}
},
{
"10": {
"title": "Stochastic blockmodels and community structure in\nnetworks.",
"author": "B. Karrer and M. E. J.\nNewman. 2011.",
"venue": "Physical Review E 83\n(2011), 016107.",
"url": null
}
},
{
"11": {
"title": "Least squares quantization in PCM.",
"author": "Stuart P. Lloyd.\n1982.",
"venue": "IEEE Transactions on Information Theory\n28, 2 (1982),\n129\u2013137.",
"url": null
}
},
{
"12": {
"title": "Popular Ensemble Methods: An Empirical Study.",
"author": "R. Maclin and D.\nOpitz. 1999.",
"venue": "Journal Of Artificial Intelligence Research\n11 (1999), 169\u2013198.",
"url": null
}
},
{
"13": {
"title": "Detecting community structure in networks.",
"author": "M. E. J. Newman.\n2004.",
"venue": "European Physical Journal B\n38, 2 (2004),\n321\u2013330.",
"url": null
}
},
{
"14": {
"title": "Objective criteria for the evaluation of clustering\nmethods.",
"author": "W. M. Rand.\n1971.",
"venue": "J. Amer. Statist. Assoc.\n66, 336 (1971),\n846\u2013850.",
"url": null
}
},
{
"15": {
"title": "Spectral Clustering and the High-Dimensional\nStochastic Blockmodel.",
"author": "K. Rohe, S. Chatterjee,\nand B. Yu. 2011.",
"venue": "Annals of Statistics 39,\n4 (2011), 1878\u20131915.",
"url": null
}
},
{
"16": {
"title": "Silhouettes: a Graphical Aid to the Interpretation\nand Validation of Cluster Analysis.",
"author": "Peter J. Rousseeuw.\n1987.",
"venue": "Computational and Applied Mathematics\n20 (1987), 53\u201365.",
"url": null
}
},
{
"17": {
"title": "Discovering Communication Pattern Shifts in\nLarge-Scale Labeled Networks using Encoder Embedding and Vertex Dynamics.",
"author": "C. Shen, J. Larson,\nH. Trinh, X. Qin, Y.\nPark, and C. E. Priebe.\n2023a.",
"venue": "https://arxiv.org/abs/2305.02381\n(2023).",
"url": null
}
},
{
"18": {
"title": "Synergistic Graph Fusion via Encoder Embedding.",
"author": "C. Shen, C. E. Priebe,\nJ. Larson, and H. Trinh.\n2023b.",
"venue": "https://arxiv.org/abs/2303.18051\n(2023).",
"url": null
}
},
{
"19": {
"title": "One-Hot Graph Encoder Embedding.",
"author": "C. Shen, Q. Wang, and\nC. E. Priebe. 2023c.",
"venue": "IEEE Transactions on Pattern Analysis and\nMachine Intelligence 45, 6\n(2023), 7933 \u2013 7938.",
"url": null
}
},
{
"20": {
"title": "Estimation and Prediction for Stochastic\nBlockmodels for Graphs with Latent Block Structure.",
"author": "T. Snijders and K.\nNowicki. 1997.",
"venue": "Journal of Classification\n14, 1 (1997),\n75\u2013100.",
"url": null
}
},
{
"21": {
"title": "A Consistent Adjacency Spectral Embedding for\nStochastic Blockmodel Graphs.",
"author": "D. Sussman, M. Tang,\nD. Fishkind, and C. Priebe.\n2012.",
"venue": "J. Amer. Statist. Assoc.\n107, 499 (2012),\n1119\u20131128.",
"url": null
}
},
{
"22": {
"title": "From Louvain to Leiden: guaranteeing well-connected\ncommunities.",
"author": "V. A. Traag, L. Waltman,\nand N. J. van Eck. 2019.",
"venue": "Scientific Reports 9\n(2019), 5233.",
"url": null
}
},
{
"23": {
"title": "Consistency of Community Detection in Networks\nunder Degree-Corrected Stochastic Block Models.",
"author": "Y. Zhao, E. Levina, and\nJ. Zhu. 2012.",
"venue": "Annals of Statistics 40,\n4 (2012), 2266\u20132292.",
"url": null
}
}
],
"url": "http://arxiv.org/html/2301.11290v3"
}