citing_id: string (9–16 chars)
cited_id: string (9–16 chars)
section_title: string (0–2.25k chars)
citation: string (52–442 chars)
text_before_citation: sequence
text_after_citation: sequence
keywords: sequence
citation_intent: string (3 classes)
citing_paper_content: dict
cited_paper_content: dict
1407.8463
1206.0197
Fig. 1.
We should point out that a related result in #REFR shows that using a similar idea of decoding multiple integer sums, the sum capacity of the Gaussian MAC can be achieved within a constant gap.
[ "If the SNR exceeds 1.5, a line segment on the capacity boundary is achievable.", "As SNR increases, the end points of the line segment approach the corner points, and the whole capacity region is achievable as soon as the SNR of both users is larger than 1 + √2.", "We point out that the decoder used in our scheme is a single-user decoder since it mainly performs lattice quantizations on the received signal, in contrast to joint decoding where the decoder needs the complete information of the codebooks of the two users.", "Hence this novel approach allows us to achieve rate pairs in the capacity region using only single-user decoders (with interference cancellation), while time-sharing or rate splitting are not needed.", "This feature of the proposed coding scheme could be of interest for practical considerations." ]
[ "Furthermore, it is also shown in #OTHEREFR that under certain conditions, some isolated (non-corner) points of the capacity region can be attained.", "To prove these results, the authors use fixed lattices which are independent of channel gains.", "In this work, we close these gaps by showing that if the lattices are properly scaled in accordance with the channel gains, the full capacity region can be attained. Moreover, this paper considers exclusively the Gaussian MAC." ]
[ "Gaussian MAC", "sum capacity" ]
background
{ "title": "Gaussian Multiple Access via Compute-and-Forward", "abstract": "Abstract-Lattice codes used under the compute-and-forward paradigm suggest an alternative strategy for the standard Gaussian multiple-access channel (MAC): the receiver successively decodes the integer linear combinations of the messages until it can invert and recover all messages. In this paper, a multiple-access technique called compute-forward multiple access (CFMA) is proposed and analyzed. For the two-user MAC, it is shown that without time-sharing, the entire capacity region can be attained using CFMA with a single-user decoder as soon as the signal-to-noise ratios are above 1 + √2. A partial analysis is given for more than two users. Finally, the strategy is extended to the so-called dirty MAC, where two interfering signals are known non-causally to the two transmitters in a distributed fashion. Our scheme extends the previously known results and gives new achievable rate regions." }
{ "title": "The Approximate Sum Capacity of the Symmetric Gaussian K-User Interference Channel", "abstract": "Abstract-Interference alignment has emerged as a powerful tool in the analysis of multi-user networks. Despite considerable recent progress, the capacity region of the Gaussian K-user interference channel is still unknown in general, in part due to the challenges associated with alignment on the signal scale using lattice codes. This paper develops a new framework for lattice interference alignment, based on the compute-and-forward approach. Within this framework, each receiver decodes by first recovering two or more linear combinations of the transmitted codewords with integer-valued coefficients and then solving these linear combinations for its desired codeword. For the special case of symmetric channel gains, this framework is used to derive the approximate sum capacity of the Gaussian interference channel, up to an explicitly defined outage set of the channel gains. The key contributions are the capacity lower bounds for the weak through strong interference regimes, where each receiver should jointly decode its own codeword along with part of the interfering codewords. As part of the analysis, it is shown that decoding K linear combinations of the codewords can approach the sum capacity of the K-user Gaussian multiple-access channel up to a gap of no more than K 2 log K bits." }
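The record above describes the compute-and-forward recovery step: decode integer-linear combinations of the messages, then invert them. The following is a minimal numerical sketch of that inversion over a prime field; the field size, messages, and coefficient matrix are all illustrative assumptions, not values from either paper.

```python
import numpy as np

# Hypothetical parameters for illustration only.
p = 11  # prime field size

# Two messages, represented as length-4 vectors over Z_p.
w1 = np.array([3, 7, 0, 5])
w2 = np.array([1, 9, 4, 2])

# Suppose the receiver has decoded two integer-linear combinations
# u_m = a_m1*w1 + a_m2*w2 (mod p) with coefficient matrix A.
A = np.array([[1, 1],
              [1, 2]])  # must be invertible mod p
U = np.stack([(A[0, 0] * w1 + A[0, 1] * w2) % p,
              (A[1, 0] * w1 + A[1, 1] * w2) % p])

# Invert A over Z_p (2x2 adjugate formula) to solve for the messages.
det = int(round(np.linalg.det(A))) % p
det_inv = pow(det, -1, p)  # modular inverse (Python 3.8+)
A_inv = (det_inv * np.array([[A[1, 1], -A[0, 1]],
                             [-A[1, 0], A[0, 0]]])) % p
W = (A_inv @ U) % p

assert np.array_equal(W[0], w1) and np.array_equal(W[1], w2)
```

Both messages are recovered exactly whenever A is invertible modulo p, which is the invertibility condition the CFMA abstract relies on.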
1504.01690
1206.0197
I. INTRODUCTION
Also, as shown in #REFR , this technique can be used to approach the multiple-access sum capacity within a constant gap.
[ "Recent work has studied the problem of recovering multiple linear combinations at a single receiver. In particular, Feng et al.", "#OTHEREFR linked this problem to the shortest independent vector problem #OTHEREFR and a sequence of papers has demonstrated its value for integer-forcing MIMO decoding #OTHEREFR , #OTHEREFR - #OTHEREFR as well as for integer-forcing interference alignment #OTHEREFR - #OTHEREFR .", "However, the original compute-and-forward framework does not capture some of the subtleties that arise when decoding multiple linear combinations. For instance, as shown by Ordentlich et al.", "#OTHEREFR , after one or more linear combinations have been decoded, they can be used as side information to eliminate some of the codewords from subsequent linear combinations.", "This algebraic successive cancellation technique eliminates some of the rate constraints placed on codewords, i.e., it enlarges the rate region." ]
[ "Additionally, recent work by the first author #OTHEREFR as well as Ordentlich et al.", "#OTHEREFR revealed that decoded linear combinations can be used to infer the corresponding integer-linear combination of channel inputs, which can in turn be used to reduce the effective noise encountered in subsequent decoding steps.", "As argued in #OTHEREFR , this successive computation technique can reach the exact multiple-access sum capacity.", "Our expanded compute-and-forward framework is designed with multiple linear combinations in mind.", "Specifically, we use a computation rate region to capture the dependencies between rate constraints." ]
[ "multiple-access sum capacity" ]
method
{ "title": "Expanding the Compute-and-Forward Framework: Unequal Powers, Signal Levels, and Multiple Linear Combinations", "abstract": "Abstract-The compute-and-forward framework permits each receiver in a Gaussian network to directly decode a linear combination of the transmitted messages. The resulting linear combinations can then be employed as an end-to-end communication strategy for relaying, interference alignment, and other applications. Recent efforts have demonstrated the advantages of employing unequal powers at the transmitters and decoding more than one linear combination at each receiver. However, neither of these techniques fit naturally within the original formulation of compute-and-forward. This paper proposes an expanded compute-and-forward framework that incorporates both of these possibilities and permits an intuitive interpretation in terms of signal levels. Within this framework, recent achievability and optimality results are unified and generalized." }
{ "title": "The Approximate Sum Capacity of the Symmetric Gaussian K-User Interference Channel", "abstract": "Abstract-Interference alignment has emerged as a powerful tool in the analysis of multi-user networks. Despite considerable recent progress, the capacity region of the Gaussian K-user interference channel is still unknown in general, in part due to the challenges associated with alignment on the signal scale using lattice codes. This paper develops a new framework for lattice interference alignment, based on the compute-and-forward approach. Within this framework, each receiver decodes by first recovering two or more linear combinations of the transmitted codewords with integer-valued coefficients and then solving these linear combinations for its desired codeword. For the special case of symmetric channel gains, this framework is used to derive the approximate sum capacity of the Gaussian interference channel, up to an explicitly defined outage set of the channel gains. The key contributions are the capacity lower bounds for the weak through strong interference regimes, where each receiver should jointly decode its own codeword along with part of the interfering codewords. As part of the analysis, it is shown that decoding K linear combinations of the codewords can approach the sum capacity of the K-user Gaussian multiple-access channel up to a gap of no more than K 2 log K bits." }
1504.01690
1206.0197
Encoding:
The second ingredient in our decoding process is algebraic successive cancellation as proposed by #REFR .
[ "the receiver can obtain the integer-linear combination of the channel inputs, s^T = a^T X. Proof: By the distributive law,", "Therefore,", "where (i) uses the fact that z_eff ∈ V_C as well as #OTHEREFR .", "Thus, if μ̂_1 = µ_1, Lemma 11 will allow us to argue that s_1^T = a_1^T X, which can be used for successive computation as proposed by #OTHEREFR , i.e., creating better effective channels for subsequent linear combinations.", "In general, at the mth decoding step, we will have access to A_{m−1} X where A_{m−1} is the submatrix consisting of the first m − 1 rows of A, assuming all previous decoding steps are correct." ]
[ "The main idea is that, at decoding step m, it is possible to use linear combinations from steps 1 through m − 1 to cancel out some of the codewords participating in the integer-linear combination µ_m without changing the effective noise variance.", "This in turn reduces the noise tolerance constraints placed on the fine lattices associated with the codewords and increases the overall rate region.", "Before we proceed, we need the following lemma that connects the definition of an admissible mapping to the existence of a matrix over Z_p that can be used for algebraic successive cancellation.", "Lemma 12: Let I be an admissible mapping for A ∈ Z^{L×L}.", "For prime p large enough, there exists a lower unitriangular matrix L̄ ∈ Z_p^{L×L} such that the (m, ℓ)th entry of Ā = [L̄A] mod p is equal to zero (i.e., ā_{m,ℓ} = 0) for all (m, ℓ) = I. Furthermore, L̄ has a lower triangular inverse" ]
[ "decoding process" ]
method
{ "title": "Expanding the Compute-and-Forward Framework: Unequal Powers, Signal Levels, and Multiple Linear Combinations", "abstract": "Abstract-The compute-and-forward framework permits each receiver in a Gaussian network to directly decode a linear combination of the transmitted messages. The resulting linear combinations can then be employed as an end-to-end communication strategy for relaying, interference alignment, and other applications. Recent efforts have demonstrated the advantages of employing unequal powers at the transmitters and decoding more than one linear combination at each receiver. However, neither of these techniques fit naturally within the original formulation of compute-and-forward. This paper proposes an expanded compute-and-forward framework that incorporates both of these possibilities and permits an intuitive interpretation in terms of signal levels. Within this framework, recent achievability and optimality results are unified and generalized." }
{ "title": "The Approximate Sum Capacity of the Symmetric Gaussian K-User Interference Channel", "abstract": "Abstract-Interference alignment has emerged as a powerful tool in the analysis of multi-user networks. Despite considerable recent progress, the capacity region of the Gaussian K-user interference channel is still unknown in general, in part due to the challenges associated with alignment on the signal scale using lattice codes. This paper develops a new framework for lattice interference alignment, based on the compute-and-forward approach. Within this framework, each receiver decodes by first recovering two or more linear combinations of the transmitted codewords with integer-valued coefficients and then solving these linear combinations for its desired codeword. For the special case of symmetric channel gains, this framework is used to derive the approximate sum capacity of the Gaussian interference channel, up to an explicitly defined outage set of the channel gains. The key contributions are the capacity lower bounds for the weak through strong interference regimes, where each receiver should jointly decode its own codeword along with part of the interfering codewords. As part of the analysis, it is shown that decoding K linear combinations of the codewords can approach the sum capacity of the K-user Gaussian multiple-access channel up to a gap of no more than K 2 log K bits." }
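The passages above describe algebraic successive cancellation: a decoded linear combination is used, via a lower unitriangular row operation over Z_p, to eliminate a codeword from a subsequent combination. A toy sketch of that single row operation (all values are illustrative assumptions, not the authors' code):

```python
import numpy as np

p = 7  # hypothetical prime field size
w1 = np.array([2, 5, 1])
w2 = np.array([6, 0, 3])

A = np.array([[1, 2],    # first decoded combination:  w1 + 2*w2
              [1, 1]])   # second combination:         w1 + w2
u1 = (A[0, 0] * w1 + A[0, 1] * w2) % p
u2 = (A[1, 0] * w1 + A[1, 1] * w2) % p

# Lower unitriangular row operation over Z_p: subtract u1 from u2 so
# that the coefficient of w1 in the new second equation becomes zero.
L = np.array([[1, 0],
              [-1, 1]])          # unitriangular, hence invertible mod p
A_bar = (L @ A) % p              # new coefficients: [[1, 2], [0, 6]]
u2_bar = (u2 - u1) % p           # now involves w2 only

assert A_bar[1, 0] == 0
assert np.array_equal(u2_bar, (A_bar[1, 1] * w2) % p)
```

Since the row operation only recombines already-decoded equations, it changes which codewords appear in each equation without touching the effective noise, which is exactly the benefit the cited text attributes to the technique.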
1307.2105
1206.0197
I. INTRODUCTION
With IF equalization, on the other hand, the ratios between the individual rates achieved by each user and the symmetric capacity tend to one for almost all channel gains as the signal-to-noise ratio (SNR) increases #REFR .
[ "Recent work #OTHEREFR has shown that the IF receiver can attain the capacity of the Gaussian MIMO channel to within a constant number of bits in an open-loop scenario (no CSIT), provided that an appropriate universal linear precoding operation is applied at the transmitter.", "Moreover, even without precoding at the transmitter, it is shown in #OTHEREFR that for almost every channel matrix the IF receiver attains the total degrees-of-freedom (DoF) offered by the channel, even when the number of receive antennas is smaller than the number of transmit antennas and is unknown at the transmitter.", "This is in sharp contrast to standard linear receivers that cannot achieve any DoF in such scenarios.", "As an example consider the M-user Gaussian multiple-access channel (MAC) where each user is equipped with one transmit antenna and the receiver is also equipped with a single antenna.", "Obviously, applying the linear MMSE equalizer on the channel's output would result in highly suboptimal performance, as there are not enough observations to separate the transmitted signals at the receiver." ]
[ "Beyond its role as a low-complexity receiver architecture, IF also has several theoretical advantages.", "In particular, IF equalization exploits the closure of linear/lattice codebooks with respect to integer-linear combinations.", "In the last decade, lattice codes were found to play a key role in characterizing the fundamental limits of certain communication networks, see e.g. #OTHEREFR , #OTHEREFR - #OTHEREFR .", "A common feature of several of these lattice-based coding schemes is that, from the perspective of each receiver, they induce effective multiple-access channels with a reduced number of users, all of which employ the same lattice codebook.", "The achievable rates for a MAC where all users use the same lattice codebook are difficult to analyze #OTHEREFR , but can be lower bounded by the rates attained via the IF receiver #OTHEREFR ." ]
[ "channel gains" ]
background
{ "title": "Successive integer-forcing and its sum-rate optimality", "abstract": "Abstract-Integer-forcing receivers generalize traditional linear receivers for the multiple-input multiple-output channel by decoding integer-linear combinations of the transmitted streams, rather than the streams themselves. Previous works have shown that the additional degree of freedom in choosing the integer coefficients enables this receiver to approach the performance of maximum-likelihood decoding in various scenarios. Nonetheless, even for the optimal choice of integer coefficients, the additive noise at the equalizer's output is still correlated. In this work we study a variant of integer-forcing, termed successive integer-forcing, that exploits these noise correlations to improve performance. This scheme is the integer-forcing counterpart of successive interference cancellation for traditional linear receivers. Similarly to the latter, we show that successive integer-forcing is capacity achieving when it is possible to optimize the rate allocation to the different streams. In comparison to standard successive interference cancellation receivers, the successive integer-forcing receiver offers more possibilities for capacity achieving rate tuples, and in particular, ones that are more balanced." }
{ "title": "The Approximate Sum Capacity of the Symmetric Gaussian K-User Interference Channel", "abstract": "Abstract-Interference alignment has emerged as a powerful tool in the analysis of multi-user networks. Despite considerable recent progress, the capacity region of the Gaussian K-user interference channel is still unknown in general, in part due to the challenges associated with alignment on the signal scale using lattice codes. This paper develops a new framework for lattice interference alignment, based on the compute-and-forward approach. Within this framework, each receiver decodes by first recovering two or more linear combinations of the transmitted codewords with integer-valued coefficients and then solving these linear combinations for its desired codeword. For the special case of symmetric channel gains, this framework is used to derive the approximate sum capacity of the Gaussian interference channel, up to an explicitly defined outage set of the channel gains. The key contributions are the capacity lower bounds for the weak through strong interference regimes, where each receiver should jointly decode its own codeword along with part of the interfering codewords. As part of the analysis, it is shown that decoding K linear combinations of the codewords can approach the sum capacity of the K-user Gaussian multiple-access channel up to a gap of no more than K 2 log K bits." }
1307.2105
1206.0197
A. Sum Rate Optimality of Successive IF
It was shown in #REFR that for any full-rank A ∈ Z^{M×M}, such a procedure can always ensure that v_m ∈ C_m for all m, for at least one mapping between codebooks and transmit antennas.
[ "For sake of brevity, we do not repeat the details, and only briefly illustrate the idea by a simple example.", "Assume that the number of transmit antennas is M = 2 and C_2 ⊂ C_1 are two nested linear codes over the prime field Z_p with rates R_2 ≤ R_1.", "Two nested lattice codebooks are constructed by mapping C_1 and C_2 to a p-PAM constellation, and the coarse lattice, in this case, is taken as Λ_c = pZ^n.", "The first antenna transmits a codeword t_1 ∈ C_1 and the second transmits t_2 ∈ C_2 . The effective channel (12) eff,2 to ensure correct decoding.", "Thus, the described procedure enables us to \"allocate\" different rates to the different equations." ]
[ "Here, we combine this ingredient with the idea of using the decoded equations also for performing noise prediction. Namely, in the mth successive decoding step we compute", "where the role of the column vectors {l_k} is to perform noise prediction, as before, and the role of the integer-valued column vectors {q_k} is to reduce the number of lattice points participating in the remaining equations, such that only lattice points from C_m, . . . , C_M participate in V^(m). When doing so, however, one new issue arises.", "Reducing the rate of remaining equations using decoded ones is advantageous if the effective variances σ²_eff,1, . . .", ", σ²_eff,M are monotonically increasing, such that the achievable computation rates are monotonically decreasing.", "Without noise prediction, one can always choose the decoding order such that this is satisfied, i.e., start with the best equation, then decode the second best and so on." ]
[ "antennas", "∈ Z M×M" ]
background
{ "title": "Successive integer-forcing and its sum-rate optimality", "abstract": "Abstract-Integer-forcing receivers generalize traditional linear receivers for the multiple-input multiple-output channel by decoding integer-linear combinations of the transmitted streams, rather than the streams themselves. Previous works have shown that the additional degree of freedom in choosing the integer coefficients enables this receiver to approach the performance of maximum-likelihood decoding in various scenarios. Nonetheless, even for the optimal choice of integer coefficients, the additive noise at the equalizer's output is still correlated. In this work we study a variant of integer-forcing, termed successive integer-forcing, that exploits these noise correlations to improve performance. This scheme is the integer-forcing counterpart of successive interference cancellation for traditional linear receivers. Similarly to the latter, we show that successive integer-forcing is capacity achieving when it is possible to optimize the rate allocation to the different streams. In comparison to standard successive interference cancellation receivers, the successive integer-forcing receiver offers more possibilities for capacity achieving rate tuples, and in particular, ones that are more balanced." }
{ "title": "The Approximate Sum Capacity of the Symmetric Gaussian K-User Interference Channel", "abstract": "Abstract-Interference alignment has emerged as a powerful tool in the analysis of multi-user networks. Despite considerable recent progress, the capacity region of the Gaussian K-user interference channel is still unknown in general, in part due to the challenges associated with alignment on the signal scale using lattice codes. This paper develops a new framework for lattice interference alignment, based on the compute-and-forward approach. Within this framework, each receiver decodes by first recovering two or more linear combinations of the transmitted codewords with integer-valued coefficients and then solving these linear combinations for its desired codeword. For the special case of symmetric channel gains, this framework is used to derive the approximate sum capacity of the Gaussian interference channel, up to an explicitly defined outage set of the channel gains. The key contributions are the capacity lower bounds for the weak through strong interference regimes, where each receiver should jointly decode its own codeword along with part of the interfering codewords. As part of the analysis, it is shown that decoding K linear combinations of the codewords can approach the sum capacity of the K-user Gaussian multiple-access channel up to a gap of no more than K 2 log K bits." }
1605.02952
1206.0197
B. Summary of Paper Results
After defining this computation rate, we will use the new CoF protocol described in #REFR by Ordentlich et al.
[ "In this paper, our principal goal is to apply the computation rate defined in #OTHEREFR by Nazer and Gastpar to the GS-CIC." ]
[ "for approximately transforming the K-user GS-CIC to the 2-user case.", "This transformation will help us to calculate the achievable sum-rate at destinations.", "Without using any Channel State Information (CSI) at transmitters, the performance shown in Fig.", "1 is achievable using the CoF protocol described in #OTHEREFR .", "For high values of SNR, we are interested in reducing the gap between the upper bound and the achievable sum-rate in the strong and very strong interference regimes." ]
[ "new CoF protocol", "computation rate" ]
method
{ "title": "Structured Compute-and-Forward with phase precoding scheme: Beyond integers", "abstract": "The Compute-and-Forward (CoF) protocol in wireless networks has been employed for Gaussian Symmetric Real-valued Interference Channels (GS-RIC) to approximately achieve the capacity of such channels and to perform Interference Alignment (IA) by solving a system of linear equations at destinations. We aim to adapt this protocol to the GS-CIC. At high Signal-to-Noise Ratio (SNR), this protocol exhibits a fading-like behavior of the achievable sum-rate in the strong and very strong interference regimes. In the complex field this random-like behavior is related to the phase of the complex cross channel coefficients. To avoid this fading-like behavior, we consider Z[i]-lattice codes and we propose a phase precoding scheme for the CoF protocol with partial feedback. In this method the phases of the channel coefficients are fed back to the transmitters in order to choose the best precoder factors to control this random behavior. We further simulate the achievable sum-rate of this scheme, showing that a significant gain can be obtained in terms of limiting the random behavior of the achievable sum-rate." }
{ "title": "The Approximate Sum Capacity of the Symmetric Gaussian K-User Interference Channel", "abstract": "Abstract-Interference alignment has emerged as a powerful tool in the analysis of multi-user networks. Despite considerable recent progress, the capacity region of the Gaussian K-user interference channel is still unknown in general, in part due to the challenges associated with alignment on the signal scale using lattice codes. This paper develops a new framework for lattice interference alignment, based on the compute-and-forward approach. Within this framework, each receiver decodes by first recovering two or more linear combinations of the transmitted codewords with integer-valued coefficients and then solving these linear combinations for its desired codeword. For the special case of symmetric channel gains, this framework is used to derive the approximate sum capacity of the Gaussian interference channel, up to an explicitly defined outage set of the channel gains. The key contributions are the capacity lower bounds for the weak through strong interference regimes, where each receiver should jointly decode its own codeword along with part of the interfering codewords. As part of the analysis, it is shown that decoding K linear combinations of the codewords can approach the sum capacity of the K-user Gaussian multiple-access channel up to a gap of no more than K 2 log K bits." }
1902.06090
1710.07145
Related work
Hence the fixed-target part of #REFR corresponds to our current problem for the special case of advice of size 0.
[ "This line of research was continued, e.g., in #OTHEREFR .", "In #OTHEREFR , the authors concentrated on game-theoretic aspects of the scenario where multiple selfish pursuers compete to find a target, e.g., in a ring.", "The main result of #OTHEREFR is an optimal algorithm to sweep a plane in order to locate an unknown fixed target, where locating means to get the agent originating at point O to a point P such that the target is in the segment OP .", "In #OTHEREFR , the authors considered the generalization of the search problem in the plane to the case of several searchers.", "Efficient search for a fixed or a moving target in the plane, under complete ignorance on the part of the searching agent, was studied in #OTHEREFR ." ]
[ "However, while the results of #OTHEREFR are stated for any vision radius r > 0, it was tacitly assumed that r ≤ 1, and, as explained in section 2, these results do not hold for arbitrary r > 0.", "Algorithms with advice.", "The paradigm of algorithms with advice was used predominantly for tasks in graphs.", "Providing arbitrary items of information that can be used to increase efficiency of solutions to network problems has been proposed in #OTHEREFR . This approach was referred to as algorithms with advice.", "The advice, in the form of an arbitrary binary string, is given by a cooperating omniscient oracle either to the nodes of the network or to mobile agents performing some task in it." ]
[ "advice", "fixed target part" ]
background
{ "title": "Cost vs. Information Tradeoffs for Treasure Hunt in the Plane", "abstract": "A mobile agent has to find an inert treasure hidden in the plane. Both the agent and the treasure are modeled as points. This is a variant of the task known as treasure hunt. The treasure is at a distance at most D from the initial position of the agent, and the agent finds the treasure when it gets at distance r from it, called the vision radius. However, the agent does not know the location of the treasure and does not know the parameters D and r. The cost of finding the treasure is the length of the trajectory of the agent. We investigate the tradeoffs between the amount of information held a priori by the agent and the cost of treasure hunt. Following the well-established paradigm of algorithms with advice, this information is given to the agent in advance as a binary string, by an oracle cooperating with the agent and knowing the location of the treasure and the initial position of the agent. The size of advice given to the agent is the length of this binary string. For any size z of advice and any D and r, let OPT(z, D, r) be the optimal cost of finding the treasure for parameters z, D and r, if the agent has only an advice string of length z as input. We design treasure hunt algorithms working with advice of size z at cost O(OPT(z, D, r)) whenever r ≤ 1 or r ≥ 0.9D. For intermediate values of r, i.e., 1 < r < 0.9D, we design an almost optimal scheme of algorithms: for any constant α > 0, the treasure can be found at cost O(OPT(z, D, r)^{1+α})." }
{ "title": "Reaching a Target in the Plane with no Information", "abstract": "A mobile agent has to reach a target in the Euclidean plane. Both the agent and the target are modeled as points. At the beginning, the agent is at distance at most D > 0 from the target. Reaching the target means that the agent gets at a sensing distance at most r > 0 from it. The agent has a measure of length and a compass. We consider two scenarios: in the static scenario the target is inert, and in the dynamic scenario it may move arbitrarily at any (possibly varying) speed bounded by v. The agent has no information about the parameters of the problem, in particular it does not know D, r or v. The goal is to reach the target at lowest possible cost, measured by the total length of the trajectory of the agent. Our main result is establishing the minimum cost (up to multiplicative constants) of reaching the target under both scenarios, and providing the optimal algorithm for the agent. For the static scenario the minimum cost is Θ((log D + log(1/r)) D²/r), and for the dynamic scenario it is Θ((log M + log(1/r)) M²/r), where M = max(D, v). Under the latter scenario, the speed of the agent in our algorithm grows exponentially with time, and we prove that for an agent whose speed grows only polynomially with time, this cost is impossible to achieve." }
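The abstract above gives the static-scenario cost bound Θ((log D + log(1/r)) D²/r). As a hedged illustration of how this expression scales (the Θ(·) hides an unspecified constant factor, and the helper name below is an assumption of this sketch, not from the paper):

```python
import math

def static_cost(D, r):
    # The expression inside the Θ(·) bound for the static scenario:
    # (log D + log(1/r)) * D^2 / r, using base-2 logarithms.
    return (math.log2(D) + math.log2(1 / r)) * D**2 / r

# The cost grows roughly quadratically in D and inversely in r,
# with an extra logarithmic factor from the unknown parameters.
for D, r in [(10, 0.5), (100, 0.5), (100, 0.1)]:
    print(f"D={D}, r={r}: cost expression = {static_cost(D, r):.0f}")
```

This matches the intuition from the abstract: halving the sensing radius r more than doubles the cost, since both the D²/r factor and the log(1/r) factor grow.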
1902.06090
1710.07145
Related work
However, while the results of #REFR are stated for any vision radius r > 0, it was tacitly assumed that r ≤ 1, and, as explained in section 2, these results do not hold for arbitrary r > 0.
[ "In #OTHEREFR , the authors concentrated on game-theoretic aspects of the scenario where multiple selfish pursuers compete to find a target, e.g., in a ring.", "The main result of #OTHEREFR is an optimal algorithm to sweep a plane in order to locate an unknown fixed target, where locating means to get the agent originating at point O to a point P such that the target is in the segment OP .", "In #OTHEREFR , the authors considered the generalization of the search problem in the plane to the case of several searchers.", "Efficient search for a fixed or a moving target in the plane, under complete ignorance on the part of the searching agent, was studied in #OTHEREFR .", "Hence the fixed target part of #OTHEREFR corresponds to our current problem for the special case of advice of size 0." ]
[ "Algorithms with advice.", "The paradigm of algorithms with advice was used predominantly for tasks in graphs.", "Providing arbitrary items of information that can be used to increase efficiency of solutions to network problems has been proposed in #OTHEREFR . This approach was referred to as algorithms with advice.", "The advice, in the form of an arbitrary binary string, is given by a cooperating omniscient oracle either to the nodes of the network or to mobile agents performing some task in it.", "In the first case, instead of advice, the term informative labeling schemes is sometimes used, if different nodes can get different information." ]
[ "vision radius" ]
background
{ "title": "Cost vs. Information Tradeoffs for Treasure Hunt in the Plane", "abstract": "A mobile agent has to find an inert treasure hidden in the plane. Both the agent and the treasure are modeled as points. This is a variant of the task known as treasure hunt. The treasure is at a distance at most D from the initial position of the agent, and the agent finds the treasure when it gets at distance r from it, called the vision radius. However, the agent does not know the location of the treasure and does not know the parameters D and r. The cost of finding the treasure is the length of the trajectory of the agent. We investigate the tradeoffs between the amount of information held a priori by the agent and the cost of treasure hunt. Following the well-established paradigm of algorithms with advice, this information is given to the agent in advance as a binary string, by an oracle cooperating with the agent and knowing the location of the treasure and the initial position of the agent. The size of advice given to the agent is the length of this binary string. For any size z of advice and any D and r, let OP T (z, D, r) be the optimal cost of finding the treasure for parameters z, D and r, if the agent has only an advice string of length z as input. We design treasure hunt algorithms working with advice of size z at cost O(OP T (z, D, r)) whenever r ≤ 1 or r ≥ 0.9D. For intermediate values of r, i.e., 1 < r < 0.9D, we design an almost optimal scheme of algorithms: for any constant α > 0, the treasure can be found at cost O(OP T (z, D, r) 1+α )." }
{ "title": "Reaching a Target in the Plane with no Information", "abstract": "A mobile agent has to reach a target in the Euclidean plane. Both the agent and the target are modeled as points. At the beginning, the agent is at distance at most D > 0 from the target. Reaching the target means that the agent gets at a sensing distance at most r > 0 from it. The agent has a measure of length and a compass. We consider two scenarios: in the static scenario the target is inert, and in the dynamic scenario it may move arbitrarily at any (possibly varying) speed bounded by v. The agent has no information about the parameters of the problem, in particular it does not know D, r or v. The goal is to reach the target at lowest possible cost, measured by the total length of the trajectory of the agent. Our main result is establishing the minimum cost (up to multiplicative constants) of reaching the target under both scenarios, and providing the optimal algorithm for the agent. For the static scenario the minimum cost is Θ((log D + log 1 r )D 2 /r), and for the dynamic scenario it is Θ((log M + log 1 r )M 2 /r), where M = max (D, v). Under the latter scenario, the speed of the agent in our algorithm grows exponentially with time, and we prove that for an agent whose speed grows only polynomially with time, this cost is impossible to achieve." }
1812.11534
1501.05083
Experiments and results
And further, we also compare our method with the other four deflation methods #REFR on the following four small systems.
[ "From Table 2, we can see that for the examples with more variables and high multiplicity, our method has a better result regardless of the verified accuracy, computing time or the final scale.", "We also test the example in #OTHEREFR with the form: {x Table 2 has this form for n = 5.", "The method in #OTHEREFR can compute this example for n = 6 and it takes 659.59 seconds with a final size of 321 variables and 819 polynomials.", "We test the cases for n = 6, n = 1000 and n = 2000 with our code; it takes 0.228965 seconds, 165.274439 seconds and 1036.773847 seconds respectively without introducing new variables.", "For our method, although we introduce new variables, the size of our final deflated system is small in experiments." ]
[ "#OTHEREFR ) is below, where method A is in #OTHEREFR , method B is in #OTHEREFR , method C is in #OTHEREFR , method D is in #OTHEREFR , method E is our method VDSS.", "In Table 3, we denote Poly the number of the polynomials of the final deflation system and Var the number of the variables in the final deflation system.", "Noting that our final system does not always contain all the polynomials of the input system, we also count the polynomials of the input system that are not contained in the final system into Poly.", "Table 3 (Poly/Var for methods A, B, C, D, E): system 1: 16/4, 22/4, 22/4, 16/4, 8/4; system 2: 24/11, 11/3, 12/3, 12/3, 5/3; system 3: 32/17, 6/2, 6/2, 6/2, 4/3; system 4: 96/41, 54/3, 54/3, 22/3, 5/3.", "In Table 3, for systems 1, 2 and 4, our method matches the best of the other four methods and simultaneously has the smallest deflated system among the five methods.", "For system 3, although our final system has one more variable than method D, we have fewer polynomials." ]
[ "four deflation methods" ]
method
{ "title": "A New Deflation Method For Verifying the Isolated Singular Zeros of Polynomial Systems", "abstract": "In this paper, we develop a new deflation technique for refining or verifying the isolated singular zeros of polynomial systems. Starting from a polynomial system with an isolated singular zero, by computing the derivatives of the input polynomials directly or the linear combinations of the related polynomials, we construct a new system, which can be used to refine or verify the isolated singular zero of the input system. In order to preserve the accuracy in numerical computation as much as possible, new variables are introduced to represent the coefficients of the linear combinations of the related polynomials. To our knowledge, it is the first time that considering the deflation problem of polynomial systems from the perspective of the linear combination. Some acceleration strategies are proposed to reduce the scale of the final system. We also give some further analysis of the tolerances we use, which can help us have a better understanding of our method. The experiments show that our method is effective and efficient. Especially, it works well for zeros with high multiplicities of large systems. It also works for isolated singular zeros of non-polynomial systems." }
{ "title": "Certifying isolated singular points and their multiplicity structure", "abstract": "This paper presents two new constructions related to singular solutions of polynomial systems. The first is a new deflation method for an isolated singular root. This construction uses a single linear differential form defined from the Jacobian matrix of the input, and defines the deflated system by applying this differential form to the original system. The advantages of this new deflation is that it does not introduce new variables and the increase in the number of equations is linear instead of the quadratic increase of previous methods. The second construction gives the coefficients of the so-called inverse system or dual basis, which defines the multiplicity structure at the singular root. We present a system of equations in the original variables plus a relatively small number of new variables. We show that the roots of this new system include the original singular root but now with multiplicity one, and the new variables uniquely determine the multiplicity structure. Both constructions are \"exact\", meaning that they permit one to treat all conjugate roots simultaneously and can be used in certification procedures for singular roots and their multiplicity structure with respect to an exact rational polynomial system." }
1712.03692
0804.2940
APPENDIX IV. PROOF OF THEOREM 3
Then we use these {(l, s)} to select feasible current channel states H_2(m) and H_1(m) by comparing the CCDF vectors in F̄_P and F̄_Q. In this way, we attain (40) and #REFR .
[ "Therefore, (97) implies #OTHEREFR .", "Based on the given transition matrices P and Q, we can further simplify the constraint (97) for the case m > k.", "Recall that the j-th entries of p_i and q_i are the transition probabilities from the i-th super state to the j-th super state of the Markov processes {H_1(m)} and {H_2(m)}, respectively.", "Given p_i and q_i, i ∈ {1, 2, · · · , N_k}, we can form the corresponding CCDF matrices F̄_P and F̄_Q, respectively. From Definition 1, for m > k we can equivalently express (97) by F̄_{p_l}(n) ≤ F̄_{q_s}(n), ∀ n, with the constraint h_1(j) ≤ h_2(j), for the time index j < m.", "To fulfill the constraints h_1(j) ≤ h_2(j), j < m, we choose the row indices l and s of the transition matrices P and Q, respectively, such that g(l) ≤ g(s) is ensured, which is due to the definition of the mapping g in #OTHEREFR and also the state values being listed in increasing order." ]
[ "Combining with (39), we obtain the sufficient conditions to attain {H_1(m)} ≤_st {H_2(m)}, which implies the degradedness and completes the proof." ]
[ "feasible current channel" ]
method
{ "title": "On Stochastic Orders and Fast Fading Multiuser Channels with Statistical CSIT", "abstract": "In this paper, fading Gaussian multiuser channels are considered. If the channel is perfectly known to the transmitter, capacity has been established for many cases in which the channels may satisfy certain information theoretic orders such as degradedness or strong/very strong interference. Here, we study the case when only the statistics of the channels are known at the transmitter which is an open problem in general. The main contribution of this paper is the following: First, we introduce a framework to classify random fading channels based on their joint distributions by leveraging three schemes: maximal coupling, coupling, and copulas. The underlying spirit of all scheme is, we obtain an equivalent channel by changing the joint distribution in such a way that it now satisfies a certain information theoretic order while ensuring that the marginal distributions of the channels to the different users are not changed. The construction of this equivalent multi-user channel allows us to directly make use of existing capacity results, which includes Gaussian interference channels, Gaussian broadcast channels, and Gaussian wiretap channels. We also extend the framework to channels with a specific memory structure, namely, channels with finite-state, wherein the Markov fading broadcast channel is discussed as a special case. Several practical examples such as Rayleigh fading and Nakagami-m fading illustrate the applicability of the derived results." }
{ "title": "Secret Key Agreement by Soft-decision of Signals in Gaussian Maurer's Model", "abstract": "We consider the problem of secret key agreement in Gaussian Maurer's Model. In Gaussian Maurer's model, legitimate receivers, Alice and Bob, and a wire-tapper, Eve, receive signals randomly generated by a satellite through three independent memoryless Gaussian channels respectively. Then Alice and Bob generate a common secret key from their received signals. In this model, we propose a protocol for generating a common secret key by using the result of soft-decision of Alice and Bob's received signals. Then, we calculate a lower bound on the secret key rate in our proposed protocol. As a result of comparison with the protocol that only uses hard-decision, we found that the higher rate is obtained by using our protocol." }
1902.05566
1809.07053
C. BASELINES
NAIS #REFR : Neural Attentive Item Similarity Model designs an attention network to distinguish the different importance of the historical items in a user profile.
[ "We have modified VBPR to make it suitable for learning using the pointwise log loss function.", "FISM #OTHEREFR : This method has been described in detail in the previous section (see Eq. 2).", "The idea of integrating latent factor models with neighborhood-based models is consistent with our proposed Multimodal IRIS, so this method is very important as a baseline.", "NCF #OTHEREFR : Neural CF leverages an MLP to model the nonlinearities between user and item under the matrix factorization framework.", "Due to the powerful representation of neural networks, this model achieves the best performance among the latent factor models." ]
[ "The item similarity framework using the attention mechanism is part of Multimodal IRIS. Image IRIS: As in Section III.C, Image IRIS is a special case of Multimodal IRIS with image features as input to the IRN, which can be used to directly compare with VBPR.", "Image-Add-Text IRIS: Image-Add-Text IRIS is an extension based on Image IRIS, in which the image and text data are weighted and added at the input side of IRN.", "Multimodal IRIS: It is the complete IRIS framework that uses both image and text features as input to IRN and incorporates a knowledge sharing unit between modalities.", "The choice of the above comparison methods makes our experiment more persuasive.", "VBPR is the most representative latent factor model combined with visual information." ]
[ "Similarity Model" ]
background
{ "title": "Interest-Related Item Similarity Model Based on Multimodal Data for Top-N Recommendation", "abstract": "Nowadays, the recommendation systems are applied in the fields of e-commerce, video websites, social networking sites, which bring great convenience to people's daily lives. The types of information are diversified and abundant in recommendation systems; therefore the proportion of unstructured multimodal data such as text, image, and video is increasing. However, due to the representation gap between different modalities, it is intractable to effectively use unstructured multimodal data to improve the efficiency of recommendation systems. In this paper, we propose an end-to-end multimodal interest-related item similarity model (multimodal IRIS) to provide recommendations based on the multimodal data source. Specifically, the multimodal IRIS model consists of three modules, i.e., multimodal feature learning module, the interest-related network (IRN) module, and item similarity recommendation module. The multimodal feature learning module adds knowledge sharing unit among different modalities. Then, IRN learns the interest relevance between target item and different historical items respectively. Finally, the multimodal feature learning, IRN, and item similarity recommendation modules are unified into an integrated system to achieve performance enhancements and to accommodate the addition or absence of different modal data. Extensive experiments on real-world datasets show that, by dealing with the multimodal data which people may pay more attention to when selecting items, the proposed multimodal IRIS significantly improves accuracy and interpretability on top-N recommendation task over the state-of-the-art methods. Top-N recommendation, multimodal data, multimodal interest-related item similarity, knowledge sharing unit." }
{ "title": "NAIS: Neural Attentive Item Similarity Model for Recommendation", "abstract": "Abstract-Item-to-item collaborative filtering (aka. item-based CF) has been long used for building recommender systems in industrial settings, owing to its interpretability and efficiency in real-time personalization. It builds a user's profile as her historically interacted items, recommending new items that are similar to the user's profile. As such, the key to an item-based CF method is in the estimation of item similarities. Early approaches use statistical measures such as cosine similarity and Pearson coefficient to estimate item similarities, which are less accurate since they lack tailored optimization for the recommendation task. In recent years, several works attempt to learn item similarities from data, by expressing the similarity as an underlying model and estimating model parameters by optimizing a recommendation-aware objective function. While extensive efforts have been made to use shallow linear models for learning item similarities, there has been relatively less work exploring nonlinear neural network models for item-based CF. In this work, we propose a neural network model named Neural Attentive Item Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an attention network, which is capable of distinguishing which historical items in a user profile are more important for a prediction. Compared to the state-of-the-art item-based CF method Factored Item Similarity Model (FISM) [1], our NAIS has stronger representation power with only a few additional parameters brought by the attention network. Extensive experiments on two public benchmarks demonstrate the effectiveness of NAIS. This work is the first attempt that designs neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems." }
1905.02009
1809.07053
Problem Formulation
The preference relationship is constructed by finding the neighbors of the positive items, which can be interpreted as an item collaborative learning model #REFR .
[ "When sampling, we regard the neighbors as potential positive samples.", "For a user p and a time interval r, we assume that (1) user p prefers items with positive feedbacks to the others; (2) user p prefers the neighbors of the positive sample to the irrelevant ones; (3) positive samples fit the current time r better than the others; (4) neighbors of the positive sample fit the current time r better than the irrelevant ones.", "So for each (p, q, r) in D, we have the preference relationship. As such, we can generalize Equation (3) as follows:", "Here we can see that for each purchase record (p, q, r), user p prefers q to q′ and prefers q′ to q″." ]
[ "Most existing works learn to rank by constructing the potential set of each user #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR .", "Now, we give an example to illustrate the advantage of our item collaborative learning model." ]
[ "item collaborative learning" ]
background
{ "title": "Visually-aware Recommendation with Aesthetic Features", "abstract": "Abstract-Visual information plays a critical role in human decision-making process. While recent developments on visually-aware recommender systems have taken the product image into account, none of them has considered the aesthetic aspect. We argue that the aesthetic factor is very important in modeling and predicting users' preferences, especially for some fashion-related domains like clothing and jewelry. This work addresses the need of modeling aesthetic information in visually-aware recommender systems. Technically speaking, we make three key contributions in leveraging deep aesthetic features: (1) To describe the aesthetics of products, we introduce the aesthetic features extracted from product images by a deep aesthetic network. We incorporate these features into recommender system to model users' preferences in the aesthetic aspect. (2) Since in clothing recommendation, time is very important for users to make decision, we design a new tensor decomposition model for implicit feedback data. The aesthetic features are then injected to the basic tensor model to capture the temporal dynamics of aesthetic preferences (e.g., seasonal patterns). (3) We also use the aesthetic features to optimize the learning strategy on implicit feedback data. We enrich the pairwise training samples by considering the similarity among items in the visual space and graph space; the key idea is that a user may likely have similar perception on similar items. We perform extensive experiments on several real-world datasets and demonstrate the usefulness of aesthetic features and the effectiveness of our proposed methods." }
{ "title": "NAIS: Neural Attentive Item Similarity Model for Recommendation", "abstract": "Abstract-Item-to-item collaborative filtering (aka. item-based CF) has been long used for building recommender systems in industrial settings, owing to its interpretability and efficiency in real-time personalization. It builds a user's profile as her historically interacted items, recommending new items that are similar to the user's profile. As such, the key to an item-based CF method is in the estimation of item similarities. Early approaches use statistical measures such as cosine similarity and Pearson coefficient to estimate item similarities, which are less accurate since they lack tailored optimization for the recommendation task. In recent years, several works attempt to learn item similarities from data, by expressing the similarity as an underlying model and estimating model parameters by optimizing a recommendation-aware objective function. While extensive efforts have been made to use shallow linear models for learning item similarities, there has been relatively less work exploring nonlinear neural network models for item-based CF. In this work, we propose a neural network model named Neural Attentive Item Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an attention network, which is capable of distinguishing which historical items in a user profile are more important for a prediction. Compared to the state-of-the-art item-based CF method Factored Item Similarity Model (FISM) [1], our NAIS has stronger representation power with only a few additional parameters brought by the attention network. Extensive experiments on two public benchmarks demonstrate the effectiveness of NAIS. This work is the first attempt that designs neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems." }
1811.04392
1809.07053
Time Complexity Analysis
As reported in Reference #REFR , the time complexity of the NAIS model is O(k_a · k · |R_u^+|), where k_a denotes the attention factor.
[ "Equation (6)) is O(k · |R_u^+|), where k represents the embedding size and |R_u^+| denotes the number of historical items interacted by user u.", "Compared to FISM, the additional time cost of making a prediction with DeepICF is caused by the hidden layers.", "For the l-th hidden layer, the multiplication between matrices and vectors is the main operation, which can be done in O(d_{l−1} · d_l), where d_l represents the size of the l-th hidden layer and d_0 = k.", "The prediction layer only involves the inner product of two vectors, for which the complexity is O(d_L).", "As such, the overall time complexity for evaluating a DeepICF model is" ]
[ "For the model of DeepICF+a, the additional time cost comes from the fully connected networks compared to NAIS.", "Therefore, the overall time cost of evaluating a prediction with DeepICF+a is" ]
[ "attention factor" ]
background
{ "title": "Deep Item-based Collaborative Filtering for Top-N Recommendation", "abstract": "Item-based Collaborative Filtering (ICF) has been widely adopted in recommender systems in industry, owing to its strength in user interest modeling and ease in online personalization. By constructing a user's profile with the items that the user has consumed, ICF recommends items that are similar to the user's profile. With the prevalence of machine learning in recent years, significant processes have been made for ICF by learning item similarity (or representation) from data. Nevertheless, we argue that most existing works have only considered linear and shallow relationships between items, which are insufficient to capture the complicated decision-making process of users. In this article, we propose a more expressive ICF solution by accounting for the nonlinear and higherorder relationships among items. Going beyond modeling only the second-order interaction (e.g., similarity) between two items, we additionally consider the interaction among all interacted item pairs by using nonlinear neural networks. By doing this, we can effectively model the higher-order relationship among items, capturing more complicated effects in user decision-making. For example, it can differentiate which historical itemsets in a user's profile are more important in affecting the user to make a purchase decision on an item. We treat this solution as a deep variant of ICF, thus term it as DeepICF. To justify our proposal, we perform empirical studies on two public datasets from MovieLens and Pinterest. Extensive experiments verify the highly positive effect of higher-order item interaction modeling with nonlinear neural networks. Moreover, we demonstrate that by more fine-grained second-order interaction modeling with attention network, the performance of our DeepICF method can be further improved. 
In the era of information overload, recommender systems play a pivotal role in many user-oriented online services such as E-commerce, content-sharing sites, and news portal. An effective recommender system not only can facilitate the information-seeking process of users but also can create customer loyalty and increase profit for the company. With such an important role in online information systems, recommendation has become an active topic of research and attracted increasing attention in information-retrieval and data-mining communities [16, 39, 41, 45] . Among various recommendation strategies, collaborative filtering (CF) is now the dominant one and has been widely adopted in industry [25, 36] . By leveraging user-item interaction data to predict user preference, CF is mostly used in the candidate selection phase of a recommender system [42] , which is complemented by an integrated ranking engine that integrates various signal to rank the candidates selected by CF. Generally speaking, CF techniques can be divided into two types-user-based and item-based approaches. The matrix factorization (MF) model [17] is a representative user-based CF method (short for UCF), which represents a user with an ID and projects the ID into the same embedding space of items; then the relevance score between a useritem pair is estimated as the inner product of the user embedding and item embedding. In contrast, item-based CF (short for ICF) represents a user with her historically interacted items, using the similarity between the target item and interacted items to estimate the user-item relevance [15, 36] . Despite the popularity of MF in recommendation research, there are several advantages of ICF over UCF. First, by representing a user with her consumed items, ICF encodes more signal in its input than UCF that simply uses an ID to represent a user. This provides ICF more potential to improve both the accuracy [9] and interpretability [36] of user preference modeling. 
For example, there are several empirical evidences on accuracy superiority of ICF over UCF methods for top-N recommendation [8, 9, 43] ; and ICF can interpret a recommended item as its high similarity with some items that the user has consumed before, which would be more acceptable by users than \"similar users\"-based explanation scheme [47] . Second, the composability of ICF in user preference modeling makes it easier to implement online personalization [15] . For example, when a user has new purchases, instead of re-training model parameters to refresh the recommendation list, ICF can approximate the refreshed list by simply retrieving items that are similar to the new purchased items. Such a strategy has successfully provided instant personalization in YouTube based on user recent watches (cf. Section 6.2.3 Instant Recommendation of Reference [3]). By contrast, UCF methods like MF associate model parameters with a user ID, making them compulsory to update model parameters to refresh the recommendation list for a user (cf. the online-update strategy for MF [17, 30] ). Early ICF approaches use statistical measures such as Pearson correlation and cosine similarity to quantify the similarity between two items [32] . However, such methods typically require extensive manual tuning on the similarity measure to make them perform well, and it is non-trivial to adapt a well-tuned method to a new dataset or dataset of a new product domain. In recent years, data-driven methods have been developed to learn item similarity from data, among which two representative methods are the sparse linear method (SLIM) [27] and factored item similarity model (FISM) [19] . In SLIM, the item-item similarity matrix is directly learned with additional constraints on sparsity and non-negativity; in FISM, the similarity between two items is factorized" }
{ "title": "NAIS: Neural Attentive Item Similarity Model for Recommendation", "abstract": "Abstract-Item-to-item collaborative filtering (aka. item-based CF) has been long used for building recommender systems in industrial settings, owing to its interpretability and efficiency in real-time personalization. It builds a user's profile as her historically interacted items, recommending new items that are similar to the user's profile. As such, the key to an item-based CF method is in the estimation of item similarities. Early approaches use statistical measures such as cosine similarity and Pearson coefficient to estimate item similarities, which are less accurate since they lack tailored optimization for the recommendation task. In recent years, several works attempt to learn item similarities from data, by expressing the similarity as an underlying model and estimating model parameters by optimizing a recommendation-aware objective function. While extensive efforts have been made to use shallow linear models for learning item similarities, there has been relatively less work exploring nonlinear neural network models for item-based CF. In this work, we propose a neural network model named Neural Attentive Item Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an attention network, which is capable of distinguishing which historical items in a user profile are more important for a prediction. Compared to the state-of-the-art item-based CF method Factored Item Similarity Model (FISM) [1], our NAIS has stronger representation power with only a few additional parameters brought by the attention network. Extensive experiments on two public benchmarks demonstrate the effectiveness of NAIS. This work is the first attempt that designs neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems." }
1811.04392
1809.07053
33:14
The smooth hyperparameter β is consistent with the value at which the best results are achieved in He's #REFR work.
[ "As for the embedding size k, we evaluated the values of #OTHEREFR 64] in our experiments.", "For a fair comparison, we trained FISM by optimizing the same objective function of binary cross-entropy loss with the optimizer Adagrad.", "For our DeepICF models, we initialized them with FISM embeddings, which resulted in better performance and faster convergence.", "And we randomly initialized other model parameters with a Gaussian distribution wherein the value of mean and standard deviation is 0 and 0.01, respectively.", "The learning rate was searched in [0.001, 0.05, 0.01] and the value of α was experimented in the range of [0, 0.1, . . . , 0.9, 1]." ]
[ "Without additional explanation, we leveraged three hidden layers for MLP structure.", "We implemented our DeepICF models based on Tensorflow, 3 which will be released publicly once acceptance." ]
[ "best results" ]
result
{ "title": "Deep Item-based Collaborative Filtering for Top-N Recommendation", "abstract": "Item-based Collaborative Filtering (ICF) has been widely adopted in recommender systems in industry, owing to its strength in user interest modeling and ease in online personalization. By constructing a user's profile with the items that the user has consumed, ICF recommends items that are similar to the user's profile. With the prevalence of machine learning in recent years, significant processes have been made for ICF by learning item similarity (or representation) from data. Nevertheless, we argue that most existing works have only considered linear and shallow relationships between items, which are insufficient to capture the complicated decision-making process of users. In this article, we propose a more expressive ICF solution by accounting for the nonlinear and higherorder relationships among items. Going beyond modeling only the second-order interaction (e.g., similarity) between two items, we additionally consider the interaction among all interacted item pairs by using nonlinear neural networks. By doing this, we can effectively model the higher-order relationship among items, capturing more complicated effects in user decision-making. For example, it can differentiate which historical itemsets in a user's profile are more important in affecting the user to make a purchase decision on an item. We treat this solution as a deep variant of ICF, thus term it as DeepICF. To justify our proposal, we perform empirical studies on two public datasets from MovieLens and Pinterest. Extensive experiments verify the highly positive effect of higher-order item interaction modeling with nonlinear neural networks. Moreover, we demonstrate that by more fine-grained second-order interaction modeling with attention network, the performance of our DeepICF method can be further improved. 
In the era of information overload, recommender systems play a pivotal role in many user-oriented online services such as E-commerce, content-sharing sites, and news portal. An effective recommender system not only can facilitate the information-seeking process of users but also can create customer loyalty and increase profit for the company. With such an important role in online information systems, recommendation has become an active topic of research and attracted increasing attention in information-retrieval and data-mining communities [16, 39, 41, 45] . Among various recommendation strategies, collaborative filtering (CF) is now the dominant one and has been widely adopted in industry [25, 36] . By leveraging user-item interaction data to predict user preference, CF is mostly used in the candidate selection phase of a recommender system [42] , which is complemented by an integrated ranking engine that integrates various signal to rank the candidates selected by CF. Generally speaking, CF techniques can be divided into two types-user-based and item-based approaches. The matrix factorization (MF) model [17] is a representative user-based CF method (short for UCF), which represents a user with an ID and projects the ID into the same embedding space of items; then the relevance score between a useritem pair is estimated as the inner product of the user embedding and item embedding. In contrast, item-based CF (short for ICF) represents a user with her historically interacted items, using the similarity between the target item and interacted items to estimate the user-item relevance [15, 36] . Despite the popularity of MF in recommendation research, there are several advantages of ICF over UCF. First, by representing a user with her consumed items, ICF encodes more signal in its input than UCF that simply uses an ID to represent a user. This provides ICF more potential to improve both the accuracy [9] and interpretability [36] of user preference modeling. 
For example, there are several empirical evidences on accuracy superiority of ICF over UCF methods for top-N recommendation [8, 9, 43] ; and ICF can interpret a recommended item as its high similarity with some items that the user has consumed before, which would be more acceptable by users than \"similar users\"-based explanation scheme [47] . Second, the composability of ICF in user preference modeling makes it easier to implement online personalization [15] . For example, when a user has new purchases, instead of re-training model parameters to refresh the recommendation list, ICF can approximate the refreshed list by simply retrieving items that are similar to the new purchased items. Such a strategy has successfully provided instant personalization in YouTube based on user recent watches (cf. Section 6.2.3 Instant Recommendation of Reference [3]). By contrast, UCF methods like MF associate model parameters with a user ID, making them compulsory to update model parameters to refresh the recommendation list for a user (cf. the online-update strategy for MF [17, 30] ). Early ICF approaches use statistical measures such as Pearson correlation and cosine similarity to quantify the similarity between two items [32] . However, such methods typically require extensive manual tuning on the similarity measure to make them perform well, and it is non-trivial to adapt a well-tuned method to a new dataset or dataset of a new product domain. In recent years, data-driven methods have been developed to learn item similarity from data, among which two representative methods are the sparse linear method (SLIM) [27] and factored item similarity model (FISM) [19] . In SLIM, the item-item similarity matrix is directly learned with additional constraints on sparsity and non-negativity; in FISM, the similarity between two items is factorized" }
{ "title": "NAIS: Neural Attentive Item Similarity Model for Recommendation", "abstract": "Abstract-Item-to-item collaborative filtering (aka. item-based CF) has been long used for building recommender systems in industrial settings, owing to its interpretability and efficiency in real-time personalization. It builds a user's profile as her historically interacted items, recommending new items that are similar to the user's profile. As such, the key to an item-based CF method is in the estimation of item similarities. Early approaches use statistical measures such as cosine similarity and Pearson coefficient to estimate item similarities, which are less accurate since they lack tailored optimization for the recommendation task. In recent years, several works attempt to learn item similarities from data, by expressing the similarity as an underlying model and estimating model parameters by optimizing a recommendation-aware objective function. While extensive efforts have been made to use shallow linear models for learning item similarities, there has been relatively less work exploring nonlinear neural network models for item-based CF. In this work, we propose a neural network model named Neural Attentive Item Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an attention network, which is capable of distinguishing which historical items in a user profile are more important for a prediction. Compared to the state-of-the-art item-based CF method Factored Item Similarity Model (FISM) [1], our NAIS has stronger representation power with only a few additional parameters brought by the attention network. Extensive experiments on two public benchmarks demonstrate the effectiveness of NAIS. This work is the first attempt that designs neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems." }
1911.10936
1902.02368
Introduction
Note that if the comb strategy is asymptotically optimal, the solution to (1.1) should also satisfy a linear PDE with comb-strategy-based coefficients (see (3.2)), which is shown to hold in the geometric horizon setting in #REFR .
[ "And very recently in #OTHEREFR , Kobzar, Kohn and Wang found lower and upper bounds for the optimal regret of the finite stopping problem by constructing certain sub- and supersolutions of (1.1) following the method of #OTHEREFR .", "Their results are only tight for N = 3 and improve on those of #OTHEREFR .", "Let us also mention the Multiplicative Weights Algorithm, which is asymptotically optimal as both N, M → ∞ (see #OTHEREFR ).", "In this paper we construct an explicit solution to (1.1) for N = 4 with finite stopping.", "We build our candidate solution based on the conjecture of #OTHEREFR , which states that the comb strategy is asymptotically optimal for any number of experts in both the finite and geometric horizon problems." ]
[ "The key observation is that the PDE of the finite horizon case can, at least heuristically, be obtained by applying the inverse Laplace transform to the solution of #OTHEREFR extended to the complex plane.", "This is only heuristic because these linear PDEs, unlike (1.1), may not have a unique solution, and the analytic extension of our function to the complex plane is not well-behaved.", "In Appendix A, we perform this formal inverse Laplace transform and obtain the explicit expression in (1.2).", "We show in Theorem 3.1 that (1.2) is the classical solution of (3.2).", "In Theorem 3.2, we show that it also satisfies (1.1) by verifying that the comb strategy is optimal for the limiting problem." ]
[ "comb strategy based" ]
background
{ "title": "Finite-Time 4-Expert Prediction Problem", "abstract": "Abstract. We explicitly solve the nonlinear PDE that is the continuous limit of dynamic programming equation expert prediction problem in finite horizon setting with N = 4 experts. The expert prediction problem is formulated as a zero sum game between a player and an adversary. By showing that the solution is C 2 , we are able to show that the comb strategies, as conjectured in [13] , form an asymptotic Nash equilibrium. We also prove the \"Finite vs Geometric regret\" conjecture proposed in [12] for N = 4, and show that this conjecture in fact follows from the conjecture that the comb strategies are optimal." }
{ "title": "On the asymptotic optimality of the comb strategy for prediction with expert advice", "abstract": "For the problem of prediction with expert advice in the adversarial setting with geometric stopping, we compute the exact leading order expansion for the long time behavior of the value function. Then, we use this expansion to prove that as conjectured in Gravin et al. [12], the comb strategies are indeed asymptotically optimal for the adversary in the case of 4 experts. Mathematics Subject Classification (2010): 68T05, 35J60, 35L02." }
1911.10936
1902.02368
Terminal condition.
Due to the continuity of x → u T (t, x) and the symmetry of u T , we only need to show the equality u T (T, x) = Φ(x) for the case x (1) < x (2) < x (3) < x #REFR .
[ "Proof." ]
[ "Recall the definitions of the sine integral function si(x) and the cosine integral function Ci(x) (see e.g. #OTHEREFR ),", "and denote", "Under the assumption x (1) < x (2) < x (3) < x (4) , it is easy to check the following inequalities", "According to (3.1), we have", "Note that" ]
[ "u" ]
background
{ "title": "Finite-Time 4-Expert Prediction Problem", "abstract": "Abstract. We explicitly solve the nonlinear PDE that is the continuous limit of dynamic programming equation expert prediction problem in finite horizon setting with N = 4 experts. The expert prediction problem is formulated as a zero sum game between a player and an adversary. By showing that the solution is C 2 , we are able to show that the comb strategies, as conjectured in [13] , form an asymptotic Nash equilibrium. We also prove the \"Finite vs Geometric regret\" conjecture proposed in [12] for N = 4, and show that this conjecture in fact follows from the conjecture that the comb strategies are optimal." }
{ "title": "On the asymptotic optimality of the comb strategy for prediction with expert advice", "abstract": "For the problem of prediction with expert advice in the adversarial setting with geometric stopping, we compute the exact leading order expansion for the long time behavior of the value function. Then, we use this expansion to prove that as conjectured in Gravin et al. [12], the comb strategies are indeed asymptotically optimal for the adversary in the case of 4 experts. Mathematics Subject Classification (2010): 68T05, 35J60, 35L02." }
1904.11401
1902.02368
Introduction
Our PDE characterization of the value function has already seen an interesting application: in #REFR , Bayraktar et al. use it to obtain an explicit solution for the geometric stopping version of the game with n = 4 experts.
[ "Abernethy et al. #OTHEREFR consider a game played until a fixed number of losses is incurred by an expert.", "Luo and Schapire #OTHEREFR investigate a version of the game with a randomly chosen final time. In #OTHEREFR Rakhlin et al. present algorithms using \"random play out\".", "A recent paper by Gravin, Peres, and Sivan #OTHEREFR analyzes the same problems that we consider here.", "That work uses discrete methods and connections to random walks; ours can be viewed as providing its continuous-time analogue.", "For more detail on the relationship between our work and #OTHEREFR , see Subsection 3.5." ]
[ "There are other instances in the literature where scaling limits of multistep decision processes lead to parabolic or elliptic PDEs.", "For example, the work of Kohn and Serfaty on two-person game interpretations of motion by curvature #OTHEREFR and many other PDE problems #OTHEREFR has this character.", "So does the work of Peres, Sheffield, Schramm, and Wilson connecting the 'tug-of-war' game to the infinity-Laplacian #OTHEREFR and the p-Laplacian #OTHEREFR (this work has seen many extensions, e.g. #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR ).", "A particular advantage of our treatment is that it is not limited to the classical payoff function in the online machine learning literature, namely regret with respect to the best expert ϕ(x) = max k {x k }, where x k is regret with respect to expert k.", "In fact, it works for a more general class of payoff functions, namely functions ϕ that are globally Lipschitz continuous, non-decreasing, symmetric in their dependent variables x k , have linear growth at ∞, and satisfy ϕ(x 1 + c, ..., x n + c) = ϕ(x 1 , ..., x n ) + c." ]
[ "game", "geometric stopping version" ]
background
{ "title": "Prediction with Expert Advice: a PDE Perspective", "abstract": "This work addresses a classic problem of online prediction with expert advice. We assume an adversarial opponent, and we consider both the finite-horizon and random-stopping versions of this zero-sum, twoperson game. Focusing on an appropriate continuum limit and using methods from optimal control, we characterize the value of the game as the viscosity solution of a certain nonlinear partial differential equation. The analysis also reveals the predictor's and the opponent's minimax optimal strategies. Our work provides, in particular, a continuum perspective on recent work of Gravin, Peres, and Sivan (Proc SODA 2016). Our techniques are similar to those of Kohn and Serfaty (Comm Pure Appl Math 2010), where scaling limits of some two-person games led to elliptic or parabolic PDEs." }
{ "title": "On the asymptotic optimality of the comb strategy for prediction with expert advice", "abstract": "For the problem of prediction with expert advice in the adversarial setting with geometric stopping, we compute the exact leading order expansion for the long time behavior of the value function. Then, we use this expansion to prove that as conjectured in Gravin et al. [12], the comb strategies are indeed asymptotically optimal for the adversary in the case of 4 experts. Mathematics Subject Classification (2010): 68T05, 35J60, 35L02." }
1906.01017
1805.11046
Surgical Attack Using Rowhammer
Even if the ratio between the total number of objects and the number of target objects may often seem unbalanced in favor of the small ones #REFR , we found that the number of vulnerable parameters in the target objects is still significant (see Table 4).
[ "To locate the parameters of the attacker's interest (i.e., vulnerable parameters) within the memory page, she needs to find page-aligned data in the victim model.", "Modern memory allocators improve performance by storing large objects (usually multiples of the page size) page-aligned, whereas smaller objects are not.", "Thus, we first analyze the allocations performed by the PyTorch framework running on Python to understand if it performs such optimized page-aligned allocations for large objects, similar to other programs #OTHEREFR .", "We discovered this to be the case for all objects larger than 1 MB, i.e., our attacker needs to target the parameters such as weight, bias, and so on, stored as tensor objects in layers larger than 1 MB.", "Then, again focusing on the ImageNet models, we analyzed them to identify the objects that satisfy this condition." ]
[ "Furthermore, it is important to note that when considering a surgical attacker, she only needs one single vulnerable template to compromise the victim model, and there are only 1,024 possible offsets where we can store a 4-byte parameter within a 4 KB page.", "Memory massaging.", "After finding a vulnerable template, the attacker needs to massage the memory to land the victim's data on the vulnerable template.", "This can be achieved, for instance, by exploiting memory deduplication #OTHEREFR .", "Memory deduplication is a system-level memory optimization that merges read-only pages for different processes or VMs when they contain the same data." ]
[ "vulnerable parameters", "number" ]
background
{ "title": "Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks", "abstract": "Deep neural networks (DNNs) have been shown to tolerate \"brain damage\": cumulative changes to the network's parameters (e.g., pruning, numerical perturbations) typically result in a graceful degradation of classification accuracy. However, the limits of this natural resilience are not well understood in the presence of small adversarial changes to the DNN parameters' underlying memory representation, such as bit-flips that may be induced by hardware fault attacks. We study the effects of bitwise corruptions on 19 DNN models-six architectures on three image classification tasks-and we show that most models have at least one parameter that, after a specific bit-flip in their bitwise representation, causes an accuracy loss of over 90%. We employ simple heuristics to efficiently identify the parameters likely to be vulnerable. We estimate that 40-50% of the parameters in a model might lead to an accuracy drop greater than 10% when individually subjected to such singlebit perturbations. To demonstrate how an adversary could take advantage of this vulnerability, we study the impact of an exemplary hardware fault attack, Rowhammer, on DNNs. Specifically, we show that a Rowhammer-enabled attacker co-located in the same physical machine can inflict significant accuracy drops (up to 99%) even with single bit-flip corruptions and no knowledge of the model. Our results expose the limits of DNNs' resilience against parameter perturbations induced by real-world fault attacks. We conclude by discussing possible mitigations and future research directions towards fault attack-resilient DNNs." }
{ "title": "Scalable Methods for 8-bit Training of Neural Networks", "abstract": "Quantized Neural Networks (QNNs) are often used to improve network efficiency during the inference phase, i.e. after the network has been trained. Extensive research in the field suggests many different quantization schemes. Still, the number of bits required, as well as the best quantization scheme, are yet unknown. Our theoretical analysis suggests that most of the training process is robust to substantial precision reduction, and points to only a few specific operations that require higher precision. Armed with this knowledge, we quantize the model parameters, activations and layer gradients to 8-bit, leaving at a higher precision only the final step in the computation of the weight gradients. Additionally, as QNNs require batch-normalization to be trained at high precision, we introduce Range BatchNormalization (BN) which has significantly higher tolerance to quantization noise and improved computational complexity. Our simulations show that Range BN is equivalent to the traditional batch norm if a precise scale adjustment, which can be approximated analytically, is applied. To the best of the authors' knowledge, this work is the first to quantize the weights, activations, as well as a substantial volume of the gradients stream, in all layers (including batch normalization) to 8-bit while showing state-of-the-art results over the ImageNet-1K dataset." }
1909.02384
1805.11046
II. RELATED WORK
QBP2 #REFR replaces the conventional BN with range BN and constrains W, A, and E to INT8 values, while calculating G with FP MACs.
[ "The study of inference quantization is sufficient for the deep learning inference accelerators.", "Whereas, this is not enough for efficient online learning accelerators because only the data in the forward pass are considered.", "Training quantization: To further extend the quantization towards the training stage, DoReFa #OTHEREFR trains DNNs with low bit-width W, A, and G, while leaving E and BN unprocessed.", "MP [14] and MP-INT #OTHEREFR use FP16 and INT16 values, respectively, to constrain W, A, and G.", "Recently, FP8 #OTHEREFR further pushes W, A, G, E, and U to #OTHEREFR , and 16-bit FP values, respectively, still leaving BN untouched." ]
[ "Recently, WAGE #OTHEREFR adopts a layer-wise scaling factor instead of using the BN layer and quantizes W, A, G, E, and U to #OTHEREFR , and 8 bits, respectively.", "Despite its thorough quantization, WAGE is difficult to be applied to large-scale DNNs due to the absence of powerful BN layers.", "In summary, there still lacks a complete INT8 quantization framework for the training of large-scale DNNs with high accuracy." ]
[ "INT8 values", "range BN" ]
method
{ "title": "Training High-Performance and Large-Scale Deep Neural Networks with Full 8-bit Integers", "abstract": "Abstract-Deep neural network (DNN) quantization converting floating-point (FP) data in the network to integers (INT) is an effective way to shrink the model size for memory saving and simplify the operations for compute acceleration. Recently, researches on DNN quantization develop from inference to training, laying a foundation for the online training on accelerators. However, existing schemes leaving batch normalization (BN) untouched during training are mostly incomplete quantization that still adopts high precision FP in some parts of the data paths. Currently, there is no solution that can use only low bit-width INT data during the whole training process of largescale DNNs with acceptable accuracy. In this work, through decomposing all the computation steps in DNNs and fusing three special quantization functions to satisfy the different precision requirements, we propose a unified complete quantization framework termed as \"WAGEUBN\" to quantize DNNs involving all data paths including W (Weights), A (Activation), G (Gradient), E (Error), U (Update), and BN. Moreover, the Momentum optimizer is also quantized to realize a completely quantized framework. Experiments on ResNet18/34/50 models demonstrate that WAGEUBN can achieve competitive accuracy on ImageNet dataset. For the first time, the study of quantization in largescale DNNs is advanced to the full 8-bit INT level. In this way, all the operations in the training and inference can be bit-wise operations, pushing towards faster processing speed, decreased memory cost, and higher energy efficiency. Our throughout quantization framework has great potential for future efficient portable devices with online learning ability." }
{ "title": "Scalable Methods for 8-bit Training of Neural Networks", "abstract": "Quantized Neural Networks (QNNs) are often used to improve network efficiency during the inference phase, i.e. after the network has been trained. Extensive research in the field suggests many different quantization schemes. Still, the number of bits required, as well as the best quantization scheme, are yet unknown. Our theoretical analysis suggests that most of the training process is robust to substantial precision reduction, and points to only a few specific operations that require higher precision. Armed with this knowledge, we quantize the model parameters, activations and layer gradients to 8-bit, leaving at a higher precision only the final step in the computation of the weight gradients. Additionally, as QNNs require batch-normalization to be trained at high precision, we introduce Range BatchNormalization (BN) which has significantly higher tolerance to quantization noise and improved computational complexity. Our simulations show that Range BN is equivalent to the traditional batch norm if a precise scale adjustment, which can be approximated analytically, is applied. To the best of the authors' knowledge, this work is the first to quantize the weights, activations, as well as a substantial volume of the gradients stream, in all layers (including batch normalization) to 8-bit while showing state-of-the-art results over the ImageNet-1K dataset." }
1806.00496
1706.00112
Example 3.
The sub-packetization size in the original code of #REFR satisfies ℓ ≈ (n′)^{n′} when n′ grows to infinity; thus in our new code it satisfies ℓ ≈ a(n′)^{n′} for some integer a.
[ "The RS #OTHEREFR 26) code in #OTHEREFR achieves the normalized repair bandwidth of b/(ℓ(n − 1)) < 0.54, while it requires ℓ = 2.7 × 10^8.", "Our scheme has a much smaller ℓ compared to the scheme in #OTHEREFR while the repair bandwidth is a bit larger.", "In the above theorem, we extend #OTHEREFR to a linearly larger sub-packetization and an exponentially larger code length, which means that for the same code length, we can have a much smaller sub-packetization level.", "Next, we show our second realization of the scheme in multiple cosets, which is inspired by #OTHEREFR .", "Different from the previous constructions, this one allows any number of helpers, k ≤ d ≤ n − 1." ]
[ "Proof: We first prove the case when a and ℓ′ are relatively prime using Lemma 3; the case when a and ℓ′ are not necessarily relatively prime is proved in Appendix A.", "We use the code in #OTHEREFR as the original code, where the number of helpers is d′.", "We set n′ − k′ = n − k and calculate the repair bandwidth for d helpers from the original code when d′ = d − k + k′.", "Let us define F q (α) to be the field obtained by adjoining α to the base field B.", "Similarly, we define F q (α 1 , α 2 , . . . , α n ) for adjoining multiple elements." ]
[ "sub-packetization size" ]
background
{ "title": "On the Sub-Packetization Size and the Repair Bandwidth of Reed-Solomon Codes", "abstract": "Reed-Solomon (RS) codes are widely used in distributed storage systems. In this paper, we study the repair bandwidth and sub-packetization size of RS codes. The repair bandwidth is defined as the amount of transmitted information from surviving nodes to a failed node. The RS code can be viewed as a polynomial over a finite field GF(q^ℓ) evaluated at a set of points, where ℓ is called the sub-packetization size. Smaller bandwidth reduces the network traffic in distributed storage, and smaller ℓ facilitates the implementation of RS codes with lower complexity. Recently, Guruswami and Wootters proposed a repair method for RS codes when the evaluation points are the entire finite field. While the sub-packetization size can be arbitrarily small, the repair bandwidth is higher than the minimum storage regenerating (MSR) bound. Tamo, Ye, and Barg achieved the MSR bound but the sub-packetization size grows faster than the exponential function of the number of the evaluation points. In this paper, we present code constructions and repair schemes that extend these results to accommodate different sizes of the evaluation points. In other words, we design schemes that provide points in between. These schemes provide a flexible tradeoff between the sub-packetization size and the repair bandwidth. In addition, we generalize our schemes to manage multiple failures." }
{ "title": "Optimal Repair of Reed-Solomon Codes: Achieving the Cut-Set Bound", "abstract": "The repair problem for an (n, k) error-correcting code calls for recovery of an unavailable coordinate of the codeword by downloading as little information as possible from a subset of the remaining coordinates. Using the terminology motivated by coding in distributed storage, we attempt to repair a failed node by accessing information stored on d helper nodes, where k ≤ d ≤ n − 1, and using as little repair bandwidth as possible to recover the lost information. By the so-called cut-set bound (Dimakis et al., 2010), the repair bandwidth of an (n, k = n − r) MDS code using d helper nodes is at least dl/(d + 1 − k), where l is the size of the node. A number of constructions of MDS array codes have been shown to meet this bound with equality. In a related but separate line of work, Guruswami and Wootters (2016) studied repair of Reed-Solomon (RS) codes, showing that it is possible to perform repair using a smaller bandwidth than under the trivial approach. At the same time, their work as well as follow-up papers stopped short of constructing RS codes (or any scalar MDS codes) that meet the cut-set bound with equality, which has been an open problem in coding theory. In this work we present a solution to this problem, constructing RS codes of length n over the field of size q^l, l = exp((1 + o(1)) n log n), that meet the cut-set bound. We also prove an almost matching lower bound on l, showing that super-exponential scaling is both necessary and sufficient for achieving the cut-set bound using linear repair schemes. More precisely, we prove that for scalar MDS codes (including the RS codes) to meet this bound, the sub-packetization l must satisfy l ≥ exp((1 + o(1)) k log k)." }
1806.00496
1706.00112
Example 3.
We use the code in #REFR as the original code, where the number of helpers is d′.
[ "In the above theorem, we extend #OTHEREFR to a linearly larger sub-packetization and an exponentially larger code length, which means that for the same code length, we can have a much smaller sub-packetization level.", "Next, we show our second realization of the scheme in multiple cosets, which is inspired by #OTHEREFR .", "Different from the previous constructions, this one allows any number of helpers, k ≤ d ≤ n − 1.", "The sub-packetization size in the original code of #OTHEREFR satisfies ℓ ≈ (n′)^{n′} when n′ grows to infinity; thus in our new code it satisfies ℓ ≈ a(n′)^{n′} for some integer a.", "Proof: We first prove the case when a and ℓ′ are relatively prime using Lemma 3; the case when a and ℓ′ are not necessarily relatively prime is proved in Appendix A." ]
[ "We set n′ − k′ = n − k and calculate the repair bandwidth for d helpers from the original code when d′ = d − k + k′.", "Let us define F q (α) to be the field obtained by adjoining α to the base field B.", "Similarly, we define F q (α 1 , α 2 , . . . , α n ) for adjoining multiple elements.", "Let α i be an element of order q i over B.", "The code is defined in the field F = GF(q^ℓ) = GF(q^{s q 1 q 2 ··· q n }), which is the degree-s extension of F q (α 1 , α 2 , . . . , α n )." ]
[ "code" ]
method
{ "title": "On the Sub-Packetization Size and the Repair Bandwidth of Reed-Solomon Codes", "abstract": "Reed-Solomon (RS) codes are widely used in distributed storage systems. In this paper, we study the repair bandwidth and sub-packetization size of RS codes. The repair bandwidth is defined as the amount of transmitted information from surviving nodes to a failed node. The RS code can be viewed as a polynomial over a finite field GF(q^ℓ) evaluated at a set of points, where ℓ is called the sub-packetization size. Smaller bandwidth reduces the network traffic in distributed storage, and smaller ℓ facilitates the implementation of RS codes with lower complexity. Recently, Guruswami and Wootters proposed a repair method for RS codes when the evaluation points are the entire finite field. While the sub-packetization size can be arbitrarily small, the repair bandwidth is higher than the minimum storage regenerating (MSR) bound. Tamo, Ye, and Barg achieved the MSR bound but the sub-packetization size grows faster than the exponential function of the number of the evaluation points. In this paper, we present code constructions and repair schemes that extend these results to accommodate different sizes of the evaluation points. In other words, we design schemes that provide points in between. These schemes provide a flexible tradeoff between the sub-packetization size and the repair bandwidth. In addition, we generalize our schemes to manage multiple failures." }
{ "title": "Optimal Repair of Reed-Solomon Codes: Achieving the Cut-Set Bound", "abstract": "The repair problem for an (n, k) error-correcting code calls for recovery of an unavailable coordinate of the codeword by downloading as little information as possible from a subset of the remaining coordinates. Using the terminology motivated by coding in distributed storage, we attempt to repair a failed node by accessing information stored on d helper nodes, where k ≤ d ≤ n − 1, and using as little repair bandwidth as possible to recover the lost information. By the so-called cut-set bound (Dimakis et al., 2010), the repair bandwidth of an (n, k = n − r) MDS code using d helper nodes is at least dl/(d + 1 − k), where l is the size of the node. A number of constructions of MDS array codes have been shown to meet this bound with equality. In a related but separate line of work, Guruswami and Wootters (2016) studied repair of Reed-Solomon (RS) codes, showing that it is possible to perform repair using a smaller bandwidth than under the trivial approach. At the same time, their work as well as follow-up papers stopped short of constructing RS codes (or any scalar MDS codes) that meet the cut-set bound with equality, which has been an open problem in coding theory. In this work we present a solution to this problem, constructing RS codes of length n over the field of size q^l, l = exp((1 + o(1)) n log n), that meet the cut-set bound. We also prove an almost matching lower bound on l, showing that super-exponential scaling is both necessary and sufficient for achieving the cut-set bound using linear repair schemes. More precisely, we prove that for scalar MDS codes (including the RS codes) to meet this bound, the sub-packetization l must satisfy l ≥ exp((1 + o(1)) k log k)." }
1806.00496
1706.00112
Example 3.
Compared with the scheme in #REFR , which needs ℓ = 2.4 × 10^19 and b/(n−1) = 0.25, we provide a tradeoff between ℓ and b.
[ "Assuming the failed node is f (α * ) and α * ∈ α 1 C, then we repair it in F 1 and set the polynomials in (59).", "We can easily check that when x ∈ α 1 C, rank F 1 ({η t p 1 (x), η t p 2 (x), . . .", ", η t p s (x) : t ∈ [5]}) = 20 and when x in other cosets, rank F 1 ({η t p 1 (x), η t p 2 (x), . . .", ", η t p s (x) : t ∈ [5]}) = 5.", "Therefore, we transmit 100 symbols in F 1 , which can be normalized to b (n−1) = 0.4545." ]
[ "It should be noted that in this example, the RS(12, 8) code needs to be extended from an RS(3, −1) code, which does not exist.", "However, since we only used the properties of the polynomials p j (x) and p j (x), the new RS(12, 8) code still works." ]
[ "b" ]
method
{ "title": "On the Sub-Packetization Size and the Repair Bandwidth of Reed-Solomon Codes", "abstract": "Reed-Solomon (RS) codes are widely used in distributed storage systems. In this paper, we study the repair bandwidth and sub-packetization size of RS codes. The repair bandwidth is defined as the amount of transmitted information from surviving nodes to a failed node. The RS code can be viewed as a polynomial over a finite field G F(q ) evaluated at a set of points, where is called the sub-packetization size. Smaller bandwidth reduces the network traffic in distributed storage, and smaller facilitates the implementation of RS codes with lower complexity. Recently, Guruswami and Wootters proposed a repair method for RS codes when the evaluation points are the entire finite field. While the sub-packetization size can be arbitrarily small, the repair bandwidth is higher than the minimum storage regenerating (MSR) bound. Tamo, Ye, and Barg achieved the MSR bound but the sub-packetization size grows faster than the exponential function of the number of the evaluation points. In this paper, we present code constructions and repair schemes that extend these results to accommodate different sizes of the evaluation points. In other words, we design schemes that provide points in between. These schemes provide a flexible tradeoff between the sub-packetization size and the repair bandwidth. In addition, we generalize our schemes to manage multiple failures." }
{ "title": "Optimal Repair of Reed-Solomon Codes: Achieving the Cut-Set Bound", "abstract": "The repair problem for an pn, kq error-correcting code calls for recovery of an unavailable coordinate of the codeword by downloading as little information as possible from a subset of the remaining coordinates. Using the terminology motivated by coding in distributed storage, we attempt to repair a failed node by accessing information stored on d helper nodes, where k ď d ď n´1, and using as little repair bandwidth as possible to recover the lost information. By the so-called cut-set bound (Dimakis et al., 2010), the repair bandwidth of an pn, k \" n´rq MDS code using d helper nodes is at least dl{pd`1´kq, where l is the size of the node. A number of constructions of MDS array codes have been shown to meet this bound with equality. In a related but separate line of work, Guruswami and Wootters (2016) studied repair of Reed-Solomon (RS) codes, showing that it is possible to perform repair using a smaller bandwidth than under the trivial approach. At the same time, their work as well as follow-up papers stopped short of constructing RS codes (or any scalar MDS codes) that meet the cut-set bound with equality, which has been an open problem in coding theory. In this work we present a solution to this problem, constructing RS codes of length n over the field of size q l , l \" exppp1`op1qqn log nq that meet the cut-set bound. We also prove an almost matching lower bound on l, showing that super-exponential scaling is both necessary and sufficient for achieving the cut-set bound using linear repair schemes. More precisely, we prove that for scalar MDS codes (including the RS codes) to meet this bound, the sub-packetization l must satisfy l ě exppp1`op1qqk log kq." }
1710.07216
1706.00112
B. Repairing Reed-Solomon codes
Very recently, Tamo and the present authors #REFR gave the first explicit construction of (n, k) RS codes with the (1, d)-optimal repair property for any given k < d < n.
[ "Subsequently, the present authors #OTHEREFR used the general linear repair scheme in #OTHEREFR to construct an explicit family of RS codes with asymptotically optimal repair bandwidth, and very recently Chowdhury and Vardy #OTHEREFR further developed the results of #OTHEREFR , #OTHEREFR .", "In #OTHEREFR , Dau and Milenkovic generalized the scheme in #OTHEREFR and extended their results to a larger set of parameters.", "Several works also extended the framework of #OTHEREFR to repair more than one erasure (node failure) for RS codes #OTHEREFR , #OTHEREFR .", "At the same time, #OTHEREFR as well as follow-up papers stopped short of constructing RS codes (or any scalar MDS codes) that meet the cut-set bound (3) with equality (no matter for repairing single erasure or multiple erasures).", "All the previous papers (apart from #OTHEREFR ) focused on small sub-packetization regime, and the repair bandwidth of their constructions is rather far from the cut-set bound." ]
[ "The sub-packetization value of this construction is l \" exppp1`op1qqn log nq.", "The authors of #OTHEREFR also proved an almost matching lower bound on l, showing that for scalar MDS codes (including the RS codes) to meet the cut-set bound with linear repair scheme, the sub-packetization l must satisfy l ě exppp1`op1qqk log kq.", "In this paper, we extend the construction in #OTHEREFR to the repair of multiple erasures.", "More precisely, given any n ą k, we construct explicit pn, kq RS codes with the universal ph, dq-optimal repair property for all h ď r and all k ď d ď n´h simultaneously.", "In other words, our codes can repair any number of failed nodes from any set of helper nodes with repair bandwidth achieving the cut-set bound." ]
[ "p1, dq-optimal repair" ]
background
{ "title": "Repairing Reed-Solomon codes: Universally achieving the cut-set bound for any number of erasures", "abstract": "The repair bandwidth of a code is the minimum amount of data required to repair one or several failed nodes (erasures). For MDS codes, the repair bandwidth is bounded below by the so-called cut-set bound, and codes that meet this bound with equality are said to support optimal repair of one or multiple failed nodes. We consider the problem of repairing multiple failed nodes of Reed-Solomon (RS) codes. In a recent work with I. Tamo (Proc. IEEE FOCS 2017), we gave the first explicit construction of RS codes with optimal repair of any single failed node from any subset of helper nodes. In this paper, we construct explicit RS codes that universally achieve the cut-set bound for the repair of any number of failed nodes from any set of helper nodes. Moreover, the node size of our codes is close to the optimal (smallest possible) node size of codes with such property." }
{ "title": "Optimal Repair of Reed-Solomon Codes: Achieving the Cut-Set Bound", "abstract": "The repair problem for an pn, kq error-correcting code calls for recovery of an unavailable coordinate of the codeword by downloading as little information as possible from a subset of the remaining coordinates. Using the terminology motivated by coding in distributed storage, we attempt to repair a failed node by accessing information stored on d helper nodes, where k ď d ď n´1, and using as little repair bandwidth as possible to recover the lost information. By the so-called cut-set bound (Dimakis et al., 2010), the repair bandwidth of an pn, k \" n´rq MDS code using d helper nodes is at least dl{pd`1´kq, where l is the size of the node. A number of constructions of MDS array codes have been shown to meet this bound with equality. In a related but separate line of work, Guruswami and Wootters (2016) studied repair of Reed-Solomon (RS) codes, showing that it is possible to perform repair using a smaller bandwidth than under the trivial approach. At the same time, their work as well as follow-up papers stopped short of constructing RS codes (or any scalar MDS codes) that meet the cut-set bound with equality, which has been an open problem in coding theory. In this work we present a solution to this problem, constructing RS codes of length n over the field of size q l , l \" exppp1`op1qqn log nq that meet the cut-set bound. We also prove an almost matching lower bound on l, showing that super-exponential scaling is both necessary and sufficient for achieving the cut-set bound using linear repair schemes. More precisely, we prove that for scalar MDS codes (including the RS codes) to meet this bound, the sub-packetization l must satisfy l ě exppp1`op1qqk log kq." }
1710.07216
1706.00112
B. Repairing Reed-Solomon codes
As in #REFR , we invoke classic results of analytic number theory to describe the behavior of (5) for large n.
[ "The authors of #OTHEREFR also proved an almost matching lower bound on l, showing that for scalar MDS codes (including the RS codes) to meet the cut-set bound with linear repair scheme, the sub-packetization l must satisfy l ě exppp1`op1qqk log kq.", "In this paper, we extend the construction in #OTHEREFR to the repair of multiple erasures.", "More precisely, given any n ą k, we construct explicit pn, kq RS codes with the universal ph, dq-optimal repair property for all h ď r and all k ď d ď n´h simultaneously.", "In other words, our codes can repair any number of failed nodes from any set of helper nodes with repair bandwidth achieving the cut-set bound.", "The value of sub-packetization l of our construction equals r! times the product of the first n distinct primes in an arithmetic progression," ]
[ "In particular, the prime number theorem in arithmetic progressions (for instance, [20, p.121] ) yields asymptotic estimates for l; see #OTHEREFR for a more detailed discussion.", "For fixed r and growing n, we have l \" e p1`op1qqn log n , which is asymptotically the same as the result of #OTHEREFR .", "According to the lower bound (4), when the code rate k{n is close to 1, the sub-packetization value of our codes is close to the optimal value among all scalar linear MDS codes with the optimal repair property." ]
[ "analytic number theory" ]
background
{ "title": "Repairing Reed-Solomon codes: Universally achieving the cut-set bound for any number of erasures", "abstract": "The repair bandwidth of a code is the minimum amount of data required to repair one or several failed nodes (erasures). For MDS codes, the repair bandwidth is bounded below by the so-called cut-set bound, and codes that meet this bound with equality are said to support optimal repair of one or multiple failed nodes. We consider the problem of repairing multiple failed nodes of Reed-Solomon (RS) codes. In a recent work with I. Tamo (Proc. IEEE FOCS 2017), we gave the first explicit construction of RS codes with optimal repair of any single failed node from any subset of helper nodes. In this paper, we construct explicit RS codes that universally achieve the cut-set bound for the repair of any number of failed nodes from any set of helper nodes. Moreover, the node size of our codes is close to the optimal (smallest possible) node size of codes with such property." }
{ "title": "Optimal Repair of Reed-Solomon Codes: Achieving the Cut-Set Bound", "abstract": "The repair problem for an pn, kq error-correcting code calls for recovery of an unavailable coordinate of the codeword by downloading as little information as possible from a subset of the remaining coordinates. Using the terminology motivated by coding in distributed storage, we attempt to repair a failed node by accessing information stored on d helper nodes, where k ď d ď n´1, and using as little repair bandwidth as possible to recover the lost information. By the so-called cut-set bound (Dimakis et al., 2010), the repair bandwidth of an pn, k \" n´rq MDS code using d helper nodes is at least dl{pd`1´kq, where l is the size of the node. A number of constructions of MDS array codes have been shown to meet this bound with equality. In a related but separate line of work, Guruswami and Wootters (2016) studied repair of Reed-Solomon (RS) codes, showing that it is possible to perform repair using a smaller bandwidth than under the trivial approach. At the same time, their work as well as follow-up papers stopped short of constructing RS codes (or any scalar MDS codes) that meet the cut-set bound with equality, which has been an open problem in coding theory. In this work we present a solution to this problem, constructing RS codes of length n over the field of size q l , l \" exppp1`op1qqn log nq that meet the cut-set bound. We also prove an almost matching lower bound on l, showing that super-exponential scaling is both necessary and sufficient for achieving the cut-set bound using linear repair schemes. More precisely, we prove that for scalar MDS codes (including the RS codes) to meet this bound, the sub-packetization l must satisfy l ě exppp1`op1qqk log kq." }
1710.07216
1706.00112
B. Repairing Reed-Solomon codes
For fixed r and growing n, we have l = e^{(1+o(1)) n log n}, which is asymptotically the same as the result of #REFR .
[ "More precisely, given any n ą k, we construct explicit pn, kq RS codes with the universal ph, dq-optimal repair property for all h ď r and all k ď d ď n´h simultaneously.", "In other words, our codes can repair any number of failed nodes from any set of helper nodes with repair bandwidth achieving the cut-set bound.", "The value of sub-packetization l of our construction equals r! times the product of the first n distinct primes in an arithmetic progression,", "As in #OTHEREFR , we invoke classic results of analytic number theory to describe the behavior of (5) for large n.", "In particular, the prime number theorem in arithmetic progressions (for instance, [20, p.121] ) yields asymptotic estimates for l; see #OTHEREFR for a more detailed discussion." ]
[ "According to the lower bound (4), when the code rate k{n is close to 1, the sub-packetization value of our codes is close to the optimal value among all scalar linear MDS codes with the optimal repair property." ]
[ "e p1`op1qqn log" ]
result
{ "title": "Repairing Reed-Solomon codes: Universally achieving the cut-set bound for any number of erasures", "abstract": "The repair bandwidth of a code is the minimum amount of data required to repair one or several failed nodes (erasures). For MDS codes, the repair bandwidth is bounded below by the so-called cut-set bound, and codes that meet this bound with equality are said to support optimal repair of one or multiple failed nodes. We consider the problem of repairing multiple failed nodes of Reed-Solomon (RS) codes. In a recent work with I. Tamo (Proc. IEEE FOCS 2017), we gave the first explicit construction of RS codes with optimal repair of any single failed node from any subset of helper nodes. In this paper, we construct explicit RS codes that universally achieve the cut-set bound for the repair of any number of failed nodes from any set of helper nodes. Moreover, the node size of our codes is close to the optimal (smallest possible) node size of codes with such property." }
{ "title": "Optimal Repair of Reed-Solomon Codes: Achieving the Cut-Set Bound", "abstract": "The repair problem for an pn, kq error-correcting code calls for recovery of an unavailable coordinate of the codeword by downloading as little information as possible from a subset of the remaining coordinates. Using the terminology motivated by coding in distributed storage, we attempt to repair a failed node by accessing information stored on d helper nodes, where k ď d ď n´1, and using as little repair bandwidth as possible to recover the lost information. By the so-called cut-set bound (Dimakis et al., 2010), the repair bandwidth of an pn, k \" n´rq MDS code using d helper nodes is at least dl{pd`1´kq, where l is the size of the node. A number of constructions of MDS array codes have been shown to meet this bound with equality. In a related but separate line of work, Guruswami and Wootters (2016) studied repair of Reed-Solomon (RS) codes, showing that it is possible to perform repair using a smaller bandwidth than under the trivial approach. At the same time, their work as well as follow-up papers stopped short of constructing RS codes (or any scalar MDS codes) that meet the cut-set bound with equality, which has been an open problem in coding theory. In this work we present a solution to this problem, constructing RS codes of length n over the field of size q l , l \" exppp1`op1qqn log nq that meet the cut-set bound. We also prove an almost matching lower bound on l, showing that super-exponential scaling is both necessary and sufficient for achieving the cut-set bound using linear repair schemes. More precisely, we prove that for scalar MDS codes (including the RS codes) to meet this bound, the sub-packetization l must satisfy l ě exppp1`op1qqk log kq." }
1710.07216
1706.00112
Proof of Lemma 3:
The proof of the first part is the same as the proof of Theorem 4 in #REFR , and the new idea lies in the second part, where in particular we use transitivity of the trace mapping.
[ "Remark 2.", "Let us point out some new ingredients in the repair of multiple erasures compared to the repair of a single erasure #OTHEREFR .", "These ideas will be used in the next section where we present a scheme for repairing an arbitrary number of erasures.", "The first one appears in the proof of Lemma 1.", "The proof of Lemma 1 consists of two parts: in the first part we show that c i1 can be calculated from ttr K{Fi 1 pγv j c j q : γ P T i1 , j P Ru; in the second part we show that the elements in the set ttr K{Fi 1 pγv j c j q : γ P T i1 , j P Ru can be calculated from ttr K{F pγv j c j q : γ P S i1 , j P Ru." ]
[ "The other new ingredient is Lemma 3, where we calculate the dimension of the intersection.", "Similar calculations also allow us to achieve the cut-set bound for the repair of more than two erasures in the next section.", "Remark 3.", "Finally, consider the full subfield lattice ordered by inclusion, starting with the field F p as the root and ending with F as the unique maximal element, i.e., the subset lattice of the n-set tα 1 , α 2 , . . . , α n u.", "In the above repair scheme we relied on subfields of the form F (see (13)), i.e., those that contain all but two elements of this set." ]
[ "trace mapping" ]
method
{ "title": "Repairing Reed-Solomon codes: Universally achieving the cut-set bound for any number of erasures", "abstract": "The repair bandwidth of a code is the minimum amount of data required to repair one or several failed nodes (erasures). For MDS codes, the repair bandwidth is bounded below by the so-called cut-set bound, and codes that meet this bound with equality are said to support optimal repair of one or multiple failed nodes. We consider the problem of repairing multiple failed nodes of Reed-Solomon (RS) codes. In a recent work with I. Tamo (Proc. IEEE FOCS 2017), we gave the first explicit construction of RS codes with optimal repair of any single failed node from any subset of helper nodes. In this paper, we construct explicit RS codes that universally achieve the cut-set bound for the repair of any number of failed nodes from any set of helper nodes. Moreover, the node size of our codes is close to the optimal (smallest possible) node size of codes with such property." }
{ "title": "Optimal Repair of Reed-Solomon Codes: Achieving the Cut-Set Bound", "abstract": "The repair problem for an pn, kq error-correcting code calls for recovery of an unavailable coordinate of the codeword by downloading as little information as possible from a subset of the remaining coordinates. Using the terminology motivated by coding in distributed storage, we attempt to repair a failed node by accessing information stored on d helper nodes, where k ď d ď n´1, and using as little repair bandwidth as possible to recover the lost information. By the so-called cut-set bound (Dimakis et al., 2010), the repair bandwidth of an pn, k \" n´rq MDS code using d helper nodes is at least dl{pd`1´kq, where l is the size of the node. A number of constructions of MDS array codes have been shown to meet this bound with equality. In a related but separate line of work, Guruswami and Wootters (2016) studied repair of Reed-Solomon (RS) codes, showing that it is possible to perform repair using a smaller bandwidth than under the trivial approach. At the same time, their work as well as follow-up papers stopped short of constructing RS codes (or any scalar MDS codes) that meet the cut-set bound with equality, which has been an open problem in coding theory. In this work we present a solution to this problem, constructing RS codes of length n over the field of size q l , l \" exppp1`op1qqn log nq that meet the cut-set bound. We also prove an almost matching lower bound on l, showing that super-exponential scaling is both necessary and sufficient for achieving the cut-set bound using linear repair schemes. More precisely, we prove that for scalar MDS codes (including the RS codes) to meet this bound, the sub-packetization l must satisfy l ě exppp1`op1qqk log kq." }
1710.07216
1706.00112
APPENDIX B Proposition 2. For the set S_{i_2} defined in (15), we have
The proof of (72) is exactly the same as the proof of (66) (also the same as the proof of Lemma 1 in #REFR ), and therefore we do not repeat it.
[ "Indeed, #OTHEREFR and (72) together imply that Span F pS i2 q`Span F pS i2 α i2 q`¨¨¨`Span F pS i2 α s2´1 i2", "q \" '", "where the third equality follows from the fact that the set 1, α i1 , . . . , α", "forms a basis of F over F i1 , and the last equality follows from the fact that the set 1, β, . . .", ", β s´1 forms a basis of K over F (see #OTHEREFR ). Thus the proposition indeed follows from (72)." ]
[ "where W i is defined in (41), and F ri´1s is defined in (44).", "Indeed, (47) and (73) where the third equality follows from (38); the fourth equality follows from the fact that for j \" 2, 3, . . . , h, the set 1, α j , . . .", ", α pj´1 j forms a basis of F rj´1s over F rjs and the fact that the set 1, α 1 , . . .", ", α p1´1 1 forms a basis of F over F r1s , and the last equality follows from (34). Thus the proposition indeed follows from (73).", "The proof of (73) is exactly the same as the proof of (66) (also the same as the proof of Lemma 1 in #OTHEREFR ), and therefore we do not repeat it." ]
[ "Lemma" ]
background
{ "title": "Repairing Reed-Solomon codes: Universally achieving the cut-set bound for any number of erasures", "abstract": "The repair bandwidth of a code is the minimum amount of data required to repair one or several failed nodes (erasures). For MDS codes, the repair bandwidth is bounded below by the so-called cut-set bound, and codes that meet this bound with equality are said to support optimal repair of one or multiple failed nodes. We consider the problem of repairing multiple failed nodes of Reed-Solomon (RS) codes. In a recent work with I. Tamo (Proc. IEEE FOCS 2017), we gave the first explicit construction of RS codes with optimal repair of any single failed node from any subset of helper nodes. In this paper, we construct explicit RS codes that universally achieve the cut-set bound for the repair of any number of failed nodes from any set of helper nodes. Moreover, the node size of our codes is close to the optimal (smallest possible) node size of codes with such property." }
{ "title": "Optimal Repair of Reed-Solomon Codes: Achieving the Cut-Set Bound", "abstract": "The repair problem for an pn, kq error-correcting code calls for recovery of an unavailable coordinate of the codeword by downloading as little information as possible from a subset of the remaining coordinates. Using the terminology motivated by coding in distributed storage, we attempt to repair a failed node by accessing information stored on d helper nodes, where k ď d ď n´1, and using as little repair bandwidth as possible to recover the lost information. By the so-called cut-set bound (Dimakis et al., 2010), the repair bandwidth of an pn, k \" n´rq MDS code using d helper nodes is at least dl{pd`1´kq, where l is the size of the node. A number of constructions of MDS array codes have been shown to meet this bound with equality. In a related but separate line of work, Guruswami and Wootters (2016) studied repair of Reed-Solomon (RS) codes, showing that it is possible to perform repair using a smaller bandwidth than under the trivial approach. At the same time, their work as well as follow-up papers stopped short of constructing RS codes (or any scalar MDS codes) that meet the cut-set bound with equality, which has been an open problem in coding theory. In this work we present a solution to this problem, constructing RS codes of length n over the field of size q l , l \" exppp1`op1qqn log nq that meet the cut-set bound. We also prove an almost matching lower bound on l, showing that super-exponential scaling is both necessary and sufficient for achieving the cut-set bound using linear repair schemes. More precisely, we prove that for scalar MDS codes (including the RS codes) to meet this bound, the sub-packetization l must satisfy l ě exppp1`op1qqk log kq." }
1806.00496
1706.00112
Remark 2.
It should be noted that #REFR proposed an asymptotically tight sub-packetization bound for scalar MSR codes.
[]
[ "Nevertheless, the work in our paper can be used as a a technique that converts a vector MSR code into a scalar MSR code." ]
[ "packetization" ]
background
{ "title": "A tradeoff between the sub-packetization size and the repair bandwidth for reed-solomon code", "abstract": "codes are widely used in practical storage systems but their repair bandwidth characterization is still an open problem. RS codes can be viewed as the evaluations of a polynomial over a finite field. Recently, Guruswami and Wootters proposed a repair method for RS codes when the evaluation points are the entire field. Tamo, Ye and Barg achieved the minimum storage regenerating (MSR) bound when the sub-packetization size grows faster than the exponential function of the size of the evaluation points. In this work, we extend these results to accommodate different sizes of the evaluation points. Our schemes provide a flexible tradeoff between the sub-packetization size and the repair bandwidth. In addition, we present a technique for the sub-packetization bound of scalar MSR codes, based on the dimension of some constructed vector space." }
{ "title": "Optimal Repair of Reed-Solomon Codes: Achieving the Cut-Set Bound", "abstract": "The repair problem for an pn, kq error-correcting code calls for recovery of an unavailable coordinate of the codeword by downloading as little information as possible from a subset of the remaining coordinates. Using the terminology motivated by coding in distributed storage, we attempt to repair a failed node by accessing information stored on d helper nodes, where k ď d ď n´1, and using as little repair bandwidth as possible to recover the lost information. By the so-called cut-set bound (Dimakis et al., 2010), the repair bandwidth of an pn, k \" n´rq MDS code using d helper nodes is at least dl{pd`1´kq, where l is the size of the node. A number of constructions of MDS array codes have been shown to meet this bound with equality. In a related but separate line of work, Guruswami and Wootters (2016) studied repair of Reed-Solomon (RS) codes, showing that it is possible to perform repair using a smaller bandwidth than under the trivial approach. At the same time, their work as well as follow-up papers stopped short of constructing RS codes (or any scalar MDS codes) that meet the cut-set bound with equality, which has been an open problem in coding theory. In this work we present a solution to this problem, constructing RS codes of length n over the field of size q l , l \" exppp1`op1qqn log nq that meet the cut-set bound. We also prove an almost matching lower bound on l, showing that super-exponential scaling is both necessary and sufficient for achieving the cut-set bound using linear repair schemes. More precisely, we prove that for scalar MDS codes (including the RS codes) to meet this bound, the sub-packetization l must satisfy l ě exppp1`op1qqk log kq." }
1911.12525
1706.00112
I. INTRODUCTION
After downloading #REFR , C 1 is able to recover c_{1,b,a} : a ∈ {0, 1, . . . , s^n − 1}, b ∈ {1, 2, . . .
[ "In this section, we use ' to denote addition modulo s.", "In the first round of the repair process, the first node C 1 downloads ! s´2 ÿ j\"0 c i,j`1,ap1,a1'jq`ci,s,ap1,a1'ps´1qq : a P t0, 1, . . . , s n´1 u, i P R )", "from the helper nodes, and the second node C 2 downloads ! s´2 ÿ j\"0 c i,j`1,ap2,a2'jq`ci,s`1,ap2,a2'ps´1qq : a P t0, 1, . . . , s n´1 u, i P R )", "from the helper nodes.", "Lemma 2." ]
[ ", s´1, su ( ď s´2 ÿ j\"0 c 2,j`1,ap1,a1'jq`c2,s,ap1,a1'ps´1qq : a P t0, 1, . . . , s n´1 u ( .", "After downloading (8), C 2 is able to recover c 2,b,a : a P t0, 1, . . . , s n´1 u, b P t1, 2, . . .", ", s´1, s`1u ( ď s´2 ÿ j\"0 c 1,j`1,ap2,a2'jq`c1,s`1,ap2,a2'ps´1qq : a P t0, 1, . . . , s n´1 u ( .", "Proof.", "We only prove the first part of this lemma, and the second part can be proved in the same way." ]
[ "1,b" ]
background
{ "title": "New constructions of cooperative MSR codes: Reducing node size to exppOpnqq", "abstract": "We consider the problem of multiple-node repair in distributed storage systems under the cooperative model, where the repair bandwidth includes the amount of data exchanged between any two different storage nodes. Recently, explicit constructions of MDS codes with optimal cooperative repair bandwidth for all possible parameters were given by Ye and Barg (IEEE Transactions on Information Theory, 2019). The node size (or sub-packetization) in this construction scales as exppΘpn h qq, where h is the number of failed nodes and n is the code length. In this paper, we give new explicit constructions of optimal MDS codes for all possible parameters under the cooperative model, and the node size of our new constructions only scales as exppOpnqq for any number of failed nodes. Furthermore, it is known that any optimal MDS code under the cooperative model (including, in particular, our new code construction) also achieves optimal repair bandwidth under the centralized model, where the amount of data exchanged between failed nodes is not included in the repair bandwidth. We further show that the node size of our new construction is also much smaller than that of the best known MDS code constructions for the centralized model." }
{ "title": "Optimal Repair of Reed-Solomon Codes: Achieving the Cut-Set Bound", "abstract": "The repair problem for an pn, kq error-correcting code calls for recovery of an unavailable coordinate of the codeword by downloading as little information as possible from a subset of the remaining coordinates. Using the terminology motivated by coding in distributed storage, we attempt to repair a failed node by accessing information stored on d helper nodes, where k ď d ď n´1, and using as little repair bandwidth as possible to recover the lost information. By the so-called cut-set bound (Dimakis et al., 2010), the repair bandwidth of an pn, k \" n´rq MDS code using d helper nodes is at least dl{pd`1´kq, where l is the size of the node. A number of constructions of MDS array codes have been shown to meet this bound with equality. In a related but separate line of work, Guruswami and Wootters (2016) studied repair of Reed-Solomon (RS) codes, showing that it is possible to perform repair using a smaller bandwidth than under the trivial approach. At the same time, their work as well as follow-up papers stopped short of constructing RS codes (or any scalar MDS codes) that meet the cut-set bound with equality, which has been an open problem in coding theory. In this work we present a solution to this problem, constructing RS codes of length n over the field of size q l , l \" exppp1`op1qqn log nq that meet the cut-set bound. We also prove an almost matching lower bound on l, showing that super-exponential scaling is both necessary and sufficient for achieving the cut-set bound using linear repair schemes. More precisely, we prove that for scalar MDS codes (including the RS codes) to meet this bound, the sub-packetization l must satisfy l ě exppp1`op1qqk log kq." }
1907.03931
1706.00112
I. INTRODUCTION
In a beautiful work, Tamo, Ye, and Barg #REFR constructed Reed-Solomon codes that are MSR, i.e., that admit repair schemes with repair bandwidth meeting the cut-set bound.
[ "For a given i ∈ {1, · · · , n}, the following statements are equivalent.", "(1) There is a linear repair scheme of node c_i over F_q such that the repair bandwidth N (", "Using their characterization, the authors of #OTHEREFR construct a family of RS codes with low sub-packetization l = log_{n/r} n and a repair scheme with optimal repair bandwidth in this regime.", "However, the repair bandwidth is much higher than the cut-set bound, which can only be achieved for large sub-packetization.", "Subsequent work #OTHEREFR , #OTHEREFR generalizes the results in #OTHEREFR but none of these results approaches the cut-set bound." ]
[ "The sub-packetization is huge, l = n^{O(n)}, but they also prove a lower bound of k^{Ω(k)} for scalar MDS codes, which is even higher than the exponential lower bound for general MSR (vector MDS) codes.", "Given the large sub-packetization of MSR codes, which is not suitable for practical applications, Guruswami and Rawat #OTHEREFR proposed and studied codes that trade off repair bandwidth with sub-packetization. They constructed codes with sub-packetization l as small as r = n − k with repair bandwidth at most twice the cut-set bound, and l ≈ r^{1/ε} with repair bandwidth at most (1 + ε) times the cut-set bound, i.e., bounded by (1 + ε)(n − 1)l/r.", "In a later work, Rawat et al. #OTHEREFR propose ε-MSR codes where the download from each helper node is at most (1 + ε)l/r (so there is also load balancing across nodes).", "They also construct ε-MSR codes with sub-packetization of r^{O(r/ε)} log n by combining short MSR codes with long codes of large relative minimum distance.", "Definition I.5 (ε-MSR code)." ]
[ "Reed-Solomon codes" ]
background
{ "title": "Near-optimal Repair of Reed-Solomon Codes with Low Sub-packetization", "abstract": "Minimum storage regenerating (MSR) codes are MDS codes which allow for recovery of any single erased symbol with optimal repair bandwidth, based on the smallest possible fraction of the contents downloaded from each of the other symbols. Recently, certain Reed-Solomon codes were constructed which are MSR. However, the sub-packetization of these codes is exponentially large, growing like n^{Ω(n)} in the constant-rate regime. In this work, we study the relaxed notion of ε-MSR codes, which incur a factor of (1 + ε) higher than the optimal repair bandwidth, in the context of Reed-Solomon codes. We give constructions of constant-rate ε-MSR Reed-Solomon codes with polynomial sub-packetization of n^{O(1/ε)}, thereby giving an explicit tradeoff between the repair bandwidth and sub-packetization." }
{ "title": "Optimal Repair of Reed-Solomon Codes: Achieving the Cut-Set Bound", "abstract": "The repair problem for an (n, k) error-correcting code calls for recovery of an unavailable coordinate of the codeword by downloading as little information as possible from a subset of the remaining coordinates. Using the terminology motivated by coding in distributed storage, we attempt to repair a failed node by accessing information stored on d helper nodes, where k ≤ d ≤ n − 1, and using as little repair bandwidth as possible to recover the lost information. By the so-called cut-set bound (Dimakis et al., 2010), the repair bandwidth of an (n, k = n − r) MDS code using d helper nodes is at least dl/(d + 1 − k), where l is the size of the node. A number of constructions of MDS array codes have been shown to meet this bound with equality. In a related but separate line of work, Guruswami and Wootters (2016) studied repair of Reed-Solomon (RS) codes, showing that it is possible to perform repair using a smaller bandwidth than under the trivial approach. At the same time, their work as well as follow-up papers stopped short of constructing RS codes (or any scalar MDS codes) that meet the cut-set bound with equality, which has been an open problem in coding theory. In this work we present a solution to this problem, constructing RS codes of length n over the field of size q^l, l = exp((1 + o(1)) n log n), that meet the cut-set bound. We also prove an almost matching lower bound on l, showing that super-exponential scaling is both necessary and sufficient for achieving the cut-set bound using linear repair schemes. More precisely, we prove that for scalar MDS codes (including the RS codes) to meet this bound, the sub-packetization l must satisfy l ≥ exp((1 + o(1)) k log k)." }
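The cut-set bound quoted in the abstract above, dl/(d + 1 − k), can be checked numerically; the parameter values below are illustrative choices of ours, not from the paper.

```python
def cut_set_bound(n, k, l, d=None):
    # Lower bound on repair bandwidth of an (n, k) MDS code with node
    # size l using d helper nodes (Dimakis et al.): d * l / (d + 1 - k).
    if d is None:
        d = n - 1  # all surviving nodes act as helpers
    assert k <= d <= n - 1
    return d * l / (d + 1 - k)

# Trivial repair decodes the whole file, downloading k * l symbols;
# the cut-set bound is far smaller when r = n - k is small.
n, k, l = 14, 10, 4 ** 14
print(cut_set_bound(n, k, l), "vs trivial", k * l)
```

More helpers give a smaller bound: with d = n − 1 the denominator d + 1 − k is largest, which is why comparisons in these records fix d = n − 1.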
1907.03931
1706.00112
I. INTRODUCTION
The constructions in this paper rely on the technique of picking multiple prime numbers introduced in #REFR . Moreover, we conjecture that this tradeoff is essentially tight.
[ "Let C be an [n, k] (scalar or vector) MDS code with sub-packetization l over some base field F_q.", "It is said to be ε-MSR if for every i ∈ [n], we have N (C, i, [n] \\ {i}) ≤ (1 + ε) · (n − 1)l/(n − k), with each node returning at most (1 + ε) · l/(n − k) symbols of F_q during the repair process.", "Given the recent developments on MSR Reed-Solomon codes (with large sub-packetization) and ε-MSR codes (with low sub-packetization), a natural question that arises is whether we can combine the benefits of both these lines of work and obtain ε-MSR Reed-Solomon codes with low sub-packetization. This is precisely the question addressed in this work.", "In this paper, we provide a partial answer to this question by constructing two families of RS codes that achieve small repair bandwidth using polynomial sub-packetization in the constant-rate regime of k = Θ(n).", "(Our constructions also work beyond this regime, but we will be focusing on the tradeoff in the constant-rate regime for simplicity.)" ]
[ "Conjecture I.6 (Tradeoff between repair bandwidth and sub-packetization).", "Any [n, k] ε-MSR RS code has sub-packetization (n − k)^{Ω(1/ε)}, and this is tight up to a constant factor in the exponent.", "Remark I.7.", "In our constructions, the number of helper nodes from which one needs to download information to repair the failed node might be smaller than n − 1.", "Nevertheless, we are comparing our repair bandwidth with the cut-set bound in Definition I.2, where the number of helper nodes is n − 1, which is the smallest possible." ]
[ "constructions", "multiple prime numbers" ]
method
{ "title": "Near-optimal Repair of Reed-Solomon Codes with Low Sub-packetization", "abstract": "Minimum storage regenerating (MSR) codes are MDS codes which allow for recovery of any single erased symbol with optimal repair bandwidth, based on the smallest possible fraction of the contents downloaded from each of the other symbols. Recently, certain Reed-Solomon codes were constructed which are MSR. However, the sub-packetization of these codes is exponentially large, growing like n^{Ω(n)} in the constant-rate regime. In this work, we study the relaxed notion of ε-MSR codes, which incur a factor of (1 + ε) higher than the optimal repair bandwidth, in the context of Reed-Solomon codes. We give constructions of constant-rate ε-MSR Reed-Solomon codes with polynomial sub-packetization of n^{O(1/ε)}, thereby giving an explicit tradeoff between the repair bandwidth and sub-packetization." }
{ "title": "Optimal Repair of Reed-Solomon Codes: Achieving the Cut-Set Bound", "abstract": "The repair problem for an (n, k) error-correcting code calls for recovery of an unavailable coordinate of the codeword by downloading as little information as possible from a subset of the remaining coordinates. Using the terminology motivated by coding in distributed storage, we attempt to repair a failed node by accessing information stored on d helper nodes, where k ≤ d ≤ n − 1, and using as little repair bandwidth as possible to recover the lost information. By the so-called cut-set bound (Dimakis et al., 2010), the repair bandwidth of an (n, k = n − r) MDS code using d helper nodes is at least dl/(d + 1 − k), where l is the size of the node. A number of constructions of MDS array codes have been shown to meet this bound with equality. In a related but separate line of work, Guruswami and Wootters (2016) studied repair of Reed-Solomon (RS) codes, showing that it is possible to perform repair using a smaller bandwidth than under the trivial approach. At the same time, their work as well as follow-up papers stopped short of constructing RS codes (or any scalar MDS codes) that meet the cut-set bound with equality, which has been an open problem in coding theory. In this work we present a solution to this problem, constructing RS codes of length n over the field of size q^l, l = exp((1 + o(1)) n log n), that meet the cut-set bound. We also prove an almost matching lower bound on l, showing that super-exponential scaling is both necessary and sufficient for achieving the cut-set bound using linear repair schemes. More precisely, we prove that for scalar MDS codes (including the RS codes) to meet this bound, the sub-packetization l must satisfy l ≥ exp((1 + o(1)) k log k)." }
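The ε-MSR relaxation described in the record above trades a (1 + ε) factor in bandwidth for polynomial sub-packetization; a small numeric sketch follows (the parameter values are ours, chosen for illustration).

```python
def cut_set_bandwidth(n, k, l):
    # Optimal total repair bandwidth with d = n - 1 helpers.
    return (n - 1) * l / (n - k)

def eps_msr_budget(n, k, l, eps):
    # An eps-MSR code downloads at most (1 + eps) times the optimum in
    # total, and at most (1 + eps) * l / (n - k) symbols per helper.
    return (1 + eps) * cut_set_bandwidth(n, k, l), (1 + eps) * l / (n - k)

n, k, eps = 20, 16, 0.25
l = n ** round(1 / eps)  # polynomial sub-packetization, n^{O(1/eps)}
total, per_helper = eps_msr_budget(n, k, l, eps)
```

Compare with exact MSR codes, whose sub-packetization in the constant-rate regime grows like n^{Ω(n)}: the ε relaxation keeps l polynomial in n.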
1901.09118
1409.5165
D. Analysis of Aggressiveness of Stopping Method
Note that all parameters for all methods were kept the same as in the experiments of Bloodgood and Vijay-Shanker #REFR . Table IX and Table X show these results.
[ "The goal of the experiment reported in this section is to see how our new stopping method compares with previous stopping methods.", "We define the optimal ordered pair (optimalA, optimalP), where optimalP is the maximum performance over the entire learning process minus ǫ_2.", "The purpose of ǫ_2 was to ensure that a small increase at the end of the training wouldn't count that iteration as an ideal stopping point. For all our experiments,", "optimalA is the number of annotations at the earliest iteration that achieves a minimum performance of optimalP.", "The metrics used to evaluate the aggressiveness of a stopping method are: The predicted change of F with ǫ = 0.005 and k = 1 (PCF[0.005]) was compared to margin exhaustion (ME) #OTHEREFR , drop in confidence (DC) #OTHEREFR , confidence convergence (CC) #OTHEREFR , min-error with classification change (MECC) #OTHEREFR , stabilizing predictions (SP) #OTHEREFR , and the extended variance model (EVM) #OTHEREFR ." ]
[ "When looking at the percentage of performance achieved, predicted change of F typically scored very high and achieved at least 90% of optimal performance on most datasets.", "When looking at each dataset individually, predicted change of F achieved at least 95% of optimal performance on all datasets except WebKB -projects.", "It is important to note that most of the other stopping methods also struggled to achieve optimal performance on this dataset and category.", "When looking at the percentage of annotations used, predicted change of F typically achieved a score very close to 1.00.", "This means that predicted change of F was relatively aggressive." ]
[ "methods" ]
result
{ "title": "Stopping Active Learning based on Predicted Change of F Measure for Text Classification", "abstract": "During active learning, an effective stopping method allows users to limit the number of annotations, which is cost effective. In this paper, a new stopping method called Predicted Change of F Measure will be introduced that attempts to provide the users an estimate of how much performance of the model is changing at each iteration. This stopping method can be applied with any base learner. This method is useful for reducing the data annotation bottleneck encountered when building text classification systems." }
{ "title": "A Method for Stopping Active Learning Based on Stabilizing Predictions and the Need for User-Adjustable Stopping", "abstract": "A survey of existing methods for stopping active learning (AL) reveals the needs for methods that are: more widely applicable; more aggressive in saving annotations; and more stable across changing datasets. A new method for stopping AL based on stabilizing predictions is presented that addresses these needs. Furthermore, stopping methods are required to handle a broad range of different annotation/performance tradeoff valuations. Despite this, the existing body of work is dominated by conservative methods with little (if any) attention paid to providing users with control over the behavior of stopping methods. The proposed method is shown to fill a gap in the level of aggressiveness available for stopping AL and supports providing users with control over stopping behavior." }
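The aggressiveness metrics defined in the record above (optimalA, optimalP, and the ǫ_2 slack) can be sketched as follows; the learning curve, stop point, and ǫ_2 value are made up for illustration.

```python
def optimal_pair(annotations, perfs, eps2=0.01):
    # optimalP: max performance over the whole run minus eps2;
    # optimalA: fewest annotations whose performance reaches optimalP.
    optimal_p = max(perfs) - eps2
    for a, p in zip(annotations, perfs):
        if p >= optimal_p:
            return a, optimal_p
    raise ValueError("no iteration reaches optimalP")

annotations = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
perfs = [0.50, 0.60, 0.70, 0.75, 0.78, 0.80, 0.81, 0.815, 0.816, 0.817]
optimal_a, optimal_p = optimal_pair(annotations, perfs)

# A stopping method is then scored by the fraction of optimalP it
# reaches and the fraction of optimalA annotations it spends.
stop_a, stop_perf = 800, 0.815
perf_score = stop_perf / optimal_p
annot_score = stop_a / optimal_a
```

A perf_score near 1.00 with annot_score near (or below) 1.00 is what the tables in this record call an aggressive yet accurate stopping method.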
2001.10337
1409.5165
D. Impact of Base Learner on Forecasting Performance
The stopping percent automatically determined during active learning by the Stabilizing Predictions (SP) method from #REFR is shown by the SP vertical line.
[ "This section shows the impact of the base learner (SVM, decision tree, neural network) used during iterative learning.", "Results are only shown for Accuracy as F-Measure curves", "Figure 6: Learning Curve using active learning with SVM and closest-to-hyperplane sampling on the TREC dataset.", "The TPC is set to 10%, about the earliest the current state-of-the-art can be set to without sacrificing large amounts of forecast quality." ]
[ "represented similar results.", "Figure 7 shows the overall average difference of the forecasts for the different base machine learning models for varying TPC values.", "As shown, decision tree classifiers are the easiest to forecast, neural network classifiers are the hardest to forecast, and SVM classifiers are in the middle." ]
[ "active learning" ]
method
{ "title": "Early Forecasting of Text Classification Accuracy and F-Measure with Active Learning", "abstract": "When creating text classification systems, one of the major bottlenecks is the annotation of training data. Active learning has been proposed to address this bottleneck using stopping methods to minimize the cost of data annotation. An important capability for improving the utility of stopping methods is to effectively forecast the performance of the text classification models. Forecasting can be done through the use of logarithmic models regressed on some portion of the data as learning is progressing. A critical unexplored question is what portion of the data is needed for accurate forecasting. There is a tension, where it is desirable to use less data so that the forecast can be made earlier, which is more useful, versus it being desirable to use more data, so that the forecast can be more accurate. We find that when using active learning it is even more important to generate forecasts earlier so as to make them more useful and not waste annotation effort. We investigate the difference in forecasting difficulty when using accuracy and F-measure as the text classification system performance metrics and we find that F-measure is more difficult to forecast. We conduct experiments on seven text classification datasets in different semantic domains with different characteristics and with three different base machine learning algorithms. We find that forecasting is easiest for decision tree learning, moderate for Support Vector Machines, and most difficult for neural networks." }
{ "title": "A Method for Stopping Active Learning Based on Stabilizing Predictions and the Need for User-Adjustable Stopping", "abstract": "A survey of existing methods for stopping active learning (AL) reveals the needs for methods that are: more widely applicable; more aggressive in saving annotations; and more stable across changing datasets. A new method for stopping AL based on stabilizing predictions is presented that addresses these needs. Furthermore, stopping methods are required to handle a broad range of different annotation/performance tradeoff valuations. Despite this, the existing body of work is dominated by conservative methods with little (if any) attention paid to providing users with control over the behavior of stopping methods. The proposed method is shown to fill a gap in the level of aggressiveness available for stopping AL and supports providing users with control over stopping behavior." }
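The forecasting work cited above regresses logarithmic models on an early portion of the learning curve and extrapolates; a minimal least-squares sketch (the synthetic curve and cut-off point are ours, not from the paper):

```python
import math

def fit_log_model(xs, ys):
    # Least-squares fit of y = a + b * ln(x).
    lx = [math.log(x) for x in xs]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ys) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ys))
         / sum((u - mx) ** 2 for u in lx))
    return my - b * mx, b

# Regress on the earliest annotations, then forecast a later point.
xs = [100, 200, 400, 800]
ys = [0.6 + 0.03 * math.log(x) for x in xs]  # synthetic learning curve
a, b = fit_log_model(xs, ys)
forecast = a + b * math.log(8000)
```

The tension the record describes is visible here: fitting on fewer early points makes the forecast available sooner but noisier once real (non-synthetic) curves are used.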
1504.06329
1409.5165
Variance of Kappa Estimator
Table 3 shows the variances of K̂ computed using Equation 5 at the points at which SP stopped AL for each of the datasets from #REFR .
[ "Then the Kappa measure of agreement K between M_t and M_{t−1} (see Equation 3) is estimated by K̂. Using the delta method, as described in #OTHEREFR , #OTHEREFR derived an estimator of the large-sample variance of K̂.", "According to #OTHEREFR , the estimator simplifies to", ", we can see that the variance of our estimate of Kappa is inversely proportional to the size of the stop set we use.", "Bloodgood and Vijay-Shanker (2009a) used a stop set of size 2000 for each of their datasets.", "Although this worked well in the results they reported, we do not believe that 2000 is a fixed size that will work well for all tasks and datasets where the SP method could be used." ]
[ "These variances indicate that the size of 2000 was typically sufficient to get tight estimates of Kappa, helping to illuminate the empirical success of the SP method on these datasets.", "More generally, the SP method can be augmented with a variance check: if the variance of estimated Kappa at a potential stopping point exceeds some desired threshold, then the stop set size can be increased as needed to reduce the variance.", "Looking at Equation 5 again, one can note that when p_e is relatively close to 1, the variance of K̂ can be expected to get quite large.", "In these situations, users of SP should expect to have to use larger stop set sizes, and in extreme conditions, SP may not be an advisable method to use." ]
[ "datasets" ]
method
{ "title": "Analysis of Stopping Active Learning based on Stabilizing Predictions", "abstract": "Within the natural language processing (NLP) community, active learning has been widely investigated and applied in order to alleviate the annotation bottleneck faced by developers of new NLP systems and technologies. This paper presents the first theoretical analysis of stopping active learning based on stabilizing predictions (SP). The analysis has revealed three elements that are central to the success of the SP method: (1) bounds on Cohen's Kappa agreement between successively trained models impose bounds on differences in F-measure performance of the models; (2) since the stop set does not have to be labeled, it can be made large in practice, helping to guarantee that the results transfer to previously unseen streams of examples at test/application time; and (3) good (low variance) sample estimates of Kappa between successive models can be obtained. Proofs of relationships between the level of Kappa agreement and the difference in performance between consecutive models are presented. Specifically, if the Kappa agreement between two models exceeds a threshold T (where T > 0), then the difference in F-measure performance between those models is bounded above by 4(1−T)/T in all cases. If precision of the positive conjunction of the models is assumed to be p, then the bound can be tightened to 4(1−T)/((p+1)T)." }
{ "title": "A Method for Stopping Active Learning Based on Stabilizing Predictions and the Need for User-Adjustable Stopping", "abstract": "A survey of existing methods for stopping active learning (AL) reveals the needs for methods that are: more widely applicable; more aggressive in saving annotations; and more stable across changing datasets. A new method for stopping AL based on stabilizing predictions is presented that addresses these needs. Furthermore, stopping methods are required to handle a broad range of different annotation/performance tradeoff valuations. Despite this, the existing body of work is dominated by conservative methods with little (if any) attention paid to providing users with control over the behavior of stopping methods. The proposed method is shown to fill a gap in the level of aggressiveness available for stopping AL and supports providing users with control over stopping behavior." }
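The inverse dependence of var(K̂) on the stop-set size discussed in the record above can be illustrated with a common large-sample simplification of the Kappa variance; the exact form of the paper's Equation 5 is not reproduced here, so treat this as a sketch under that assumption.

```python
def kappa_variance(p_o, p_e, n):
    # A common large-sample simplification of var(K-hat): it shrinks
    # as 1/n and blows up as chance agreement p_e approaches 1.
    return p_o * (1 - p_o) / (n * (1 - p_e) ** 2)

v_2000 = kappa_variance(0.98, 0.50, 2000)    # stop set of size 2000
v_8000 = kappa_variance(0.98, 0.50, 8000)    # 4x larger stop set
v_skewed = kappa_variance(0.98, 0.99, 2000)  # p_e near 1: much larger
```

This matches the record's two qualitative claims: quadrupling the stop set cuts the variance by four, and a p_e near 1 forces much larger stop sets (or makes SP inadvisable).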
1409.4835
1409.5165
Evaluation
This area is important because it is around where we would want to stop AL #REFR .
[ "We implemented oversampling by duplicating points and by BootOS #OTHEREFR .", "To avoid cluttering the graphs, we only show the highest-performing oversampling variant, which was by duplicating points.", "Learning curves are presented in Figures 4 and 5.", "Note that InitPA is the highest-performing method for all datasets, especially in the practically important area where the learning curves begin to plateau.", "Figure 5: Reuters learning curves (y-axis from 76% to 83%); curves shown: Oversampling (Zhu and Hovy, 2007), CurrentPA, EHG2007 #OTHEREFR ." ]
[ "Observe that the gains of InitPA over CurrentPA are smaller for Reuters.", "For some Reuters categories, InitPA and CurrentPA have nearly identical performance.", "Applying the models learned by Cur-rentPA at each round of AL on the data used to train the model reveals that the recall on the training data is nearly 100% for those categories where InitPA/CurrentPA perform similarly.", "Increasing the relative penalty for slack error on positive training points will not have much impact if (nearly) all of the pos train points are already classified correctly.", "Thus, in situations where models are already achieving nearly 100% recall on their train data, InitPA is not expected to outperform CurrentPA." ]
[ "area" ]
background
{ "title": "Taking into Account the Differences between Actively and Passively Acquired Data: The Case of Active Learning with Support Vector Machines for Imbalanced Datasets", "abstract": "Actively sampled data can have very different characteristics than passively sampled data. Therefore, it's promising to investigate using different inference procedures during AL than are used during passive learning (PL). This general idea is explored in detail for the focused case of AL with cost-weighted SVMs for imbalanced data, a situation that arises for many HLT tasks. The key idea behind the proposed InitPA method for addressing imbalance is to base cost models during AL on an estimate of overall corpus imbalance computed via a small unbiased sample rather than the imbalance in the labeled training data, which is the leading method used during PL." }
{ "title": "A Method for Stopping Active Learning Based on Stabilizing Predictions and the Need for User-Adjustable Stopping", "abstract": "A survey of existing methods for stopping active learning (AL) reveals the needs for methods that are: more widely applicable; more aggressive in saving annotations; and more stable across changing datasets. A new method for stopping AL based on stabilizing predictions is presented that addresses these needs. Furthermore, stopping methods are required to handle a broad range of different annotation/performance tradeoff valuations. Despite this, the existing body of work is dominated by conservative methods with little (if any) attention paid to providing users with control over the behavior of stopping methods. The proposed method is shown to fill a gap in the level of aggressiveness available for stopping AL and supports providing users with control over stopping behavior." }
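The InitPA idea in the record above, basing the SVM cost model on an unbiased estimate of corpus imbalance rather than on the actively sampled labeled pool, can be sketched like this (the function name and numbers are ours, for illustration only):

```python
def cost_ratio(labels):
    # Relative slack penalty C+ / C- for errors on positive vs negative
    # points, set proportional to the estimated class imbalance.
    pos = sum(labels)
    return (len(labels) - pos) / pos

# Active learning over-samples positives, so the labeled pool
# understates the true imbalance of the corpus.
active_pool = [1] * 40 + [0] * 60        # 40% positive after AL
unbiased_sample = [1] * 5 + [0] * 95     # ~5% positive in the corpus

current_pa_ratio = cost_ratio(active_pool)   # CurrentPA-style estimate
init_pa_ratio = cost_ratio(unbiased_sample)  # InitPA-style estimate
```

The gap between the two ratios is exactly why InitPA helps, and why it stops helping once recall on the training data is already near 100%, as the record notes for some Reuters categories.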
2001.08244
1701.07169
Introduction
In #REFR , a finite-difference discretization is proposed that results in Lagrangian velocity fields that are solenoidal, but the method is limited to periodic domains.
[ "In the case of discretizations based on finite differences or finite volumes, the dominant error is due to the fact that the Lagrangian velocity obtained from the Eulerian velocity using discretized delta functions is far from being a solenoidal field even if the Eulerian velocity is properly enforced to be divergence-free with respect to the finite-difference/finite-volume approximation of the divergence operator #OTHEREFR .", "In the case of discretizations based on finite elements, the dominant error is due to the fact that the weakly divergence-free Eulerian velocity is far from being a solenoidal field #OTHEREFR .", "Most of the proposed solutions to reduce the incompressibility errors do not tackle the aforementioned root causes and try to mitigate their effects instead.", "To name a few, penalty terms are often added to try to decrease the spurious change of fluid volume inside closed co-dimension one solids #OTHEREFR , a post-processing correction using a Lagrange multiplier is used after each time step to slightly change the nodal coordinates of closed co-dimension one solids to preserve their inner volume #OTHEREFR , and extremely large grad-div stabilization is added near the fluid-solid interface to obtain a velocity field that is closer to a solenoidal field in this region #OTHEREFR .", "There are some recent solutions that effectively tackle the aforementioned root causes, but compromise the applicability of the resultant numerical method." ]
[ "In #OTHEREFR , an extended finite-element discretization is proposed that leads to weakly divergence-free Eulerian velocities that are good approximations of a solenoidal field by capturing the pressure discontinuity at the fluid-solid interface, but the method has only been developed for two-dimensional settings thus far.", "Since the advent of isogeometric analysis (IGA) #OTHEREFR , spline-based discretizations of immersed approaches for FSI problems have proliferated #OTHEREFR .", "This includes NURBS-based and T-spline-based generalizations of the IB method #OTHEREFR and the FD method #OTHEREFR , which are two of the most widespread immersed approaches for challenging FSI applications.", "Unfortunately, the issue found in classical finite-element discretizations of weakly divergence-free Eulerian velocities not being a good approximation to a solenoidal field persists in spline-based discretizations #OTHEREFR .", "A definitive solution to this problem is to use Eulerian discretizations that result in pointwise satisfaction of the incompressibility constraint." ]
[ "finite-difference discretization" ]
method
{ "title": "The divergence-conforming immersed boundary method: Application to vesicle and capsule dynamics", "abstract": "We extend the recently introduced divergence-conforming immersed boundary (DCIB) method [1] to fluid-structure interaction (FSI) problems involving closed co-dimension one solids. We focus on capsules and vesicles, whose discretization is particularly challenging due to the higher-order derivatives that appear in their formulations. In two-dimensional settings, we employ cubic B-splines with periodic knot vectors to obtain discretizations of closed curves with C^2 inter-element continuity. In three-dimensional settings, we use analysis-suitable bi-cubic T-splines to obtain discretizations of closed surfaces with at least C^1 inter-element continuity. Large spurious changes of the fluid volume inside closed co-dimension one solids are a well-known issue for IB methods. The DCIB method results in volume changes orders of magnitude lower than conventional IB methods. This is a byproduct of discretizing the velocity-pressure pair with divergence-conforming B-splines, which lead to negligible incompressibility errors at the Eulerian level. The higher inter-element continuity of divergence-conforming B-splines is also crucial to avoid the quadrature/interpolation errors of IB methods becoming the dominant discretization error. Benchmark and application problems of vesicle and capsule dynamics are solved, including mesh-independence studies and comparisons with other numerical methods. (Hugo Casquero) arXiv:2001.08244v1 [physics.flu-dyn] 22 Jan 2020. Footnote: In [57], inf-sup stable, pointwise divergence-free, H^1-conforming, and pressure-robust tetrahedral elements on simplicial triangulations are constructed. The pressure space is simply the space of piecewise constants and the velocity space consists of piecewise cubic polynomials enriched with rational functions. Although these tetrahedral elements do not have the higher inter-element continuity of divergence-conforming B-splines, their use in immersed approaches for FSI is also worth consideration." }
{ "title": "An Immersed Boundary Method with Divergence-Free Velocity Interpolation and Force Spreading", "abstract": "The Immersed Boundary (IB) method is a mathematical framework for constructing robust numerical methods to study fluid-structure interaction in problems involving an elastic structure immersed in a viscous fluid. The IB formulation uses an Eulerian representation of the fluid and a Lagrangian representation of the structure. The Lagrangian and Eulerian frames are coupled by integral transforms with delta function kernels. The discretized IB equations use approximations to these transforms with regularized delta function kernels to interpolate the fluid velocity to the structure, and to spread structural forces to the fluid. It is well-known that the conventional IB method can suffer from poor volume conservation since the interpolated Lagrangian velocity field is not generally divergence-free, and so this can cause spurious volume changes. In practice, the lack of volume conservation is especially pronounced for cases where there are large pressure differences across thin structural boundaries. The aim of this paper is to greatly reduce the volume error of the IB method by introducing velocity-interpolation and force-spreading schemes with the properties that the interpolated velocity field in which the structure moves is at least C^1 and satisfies a continuous divergence-free condition, and that the force-spreading operator is the adjoint of the velocity-interpolation operator. We confirm through numerical experiments in two and three dimensions." }
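The "spurious change of fluid volume" these records discuss is typically measured as a relative enclosed-volume error of the Lagrangian curve; a 2-D sketch using the shoelace formula (the error definition here is a common convention we assume, not a quotation from the papers):

```python
def enclosed_area(pts):
    # Shoelace area of a closed polygon given as a list of (x, y).
    s = 0.0
    for i in range(len(pts)):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % len(pts)]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def volume_error(pts_t, pts_0):
    # Relative volume-conservation error of the structure at time t.
    a0 = enclosed_area(pts_0)
    return abs(enclosed_area(pts_t) - a0) / a0

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
leaked = [(0, 0), (0.99, 0), (0.99, 1), (0, 1)]  # spurious 1% shrink
err = volume_error(leaked, square)
```

Tracking this quantity over time is what lets the cited works report volume changes "orders of magnitude lower" for divergence-free schemes.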
2001.08244
1701.07169
Closed curve with active behavior
The incompressibility test established in #REFR consists of measuring e_VC during the time interval t ∈ [0.0 s, 10.0 s] using 128 × 128 Eulerian elements, 328 Lagrangian elements, and time step ∆t = 3.90625e-4 s.
[ "We start with a coarse discretization, namely, 32 × 32 Eulerian elements with k = 2, 82 Lagrangian elements with p = 2, and time step ∆t = 1.5625e-3 s.", "After that, the discretization is refined by performing uniform h-refinement three times on the Lagrangian and Eulerian meshes and dividing the time step by two each time a new level of refinement is introduced.", "A converged result is obtained as we increase the resolution.", "In #OTHEREFR , (t) is computed for a fixed discretization, namely, 128 × 128 Eulerian elements, 328 Lagrangian elements, and time step ∆t = 3.90625e-4 s; this resolution coincides with our resolution after two refinements. The result from #OTHEREFR is included in Fig. 3. Figs.", "4 (a)-(d) show the rather complex velocity patterns created by the perturbed circle with periodic stiffness." ]
[ "With this discretization, h_L ≈ h_E/2 and ∆t = h_E/10.", "In #OTHEREFR , the test is solved using three IB methods based on finite differences, namely, the DFIB method proposed in #OTHEREFR , the IBModified method proposed in #OTHEREFR , and the IBMAC method proposed in #OTHEREFR . These results are included in Fig.", "5 (a) together with the results obtained using the DCIB method with k = p = 2.", "The DCIB method is more than three orders of magnitude more accurate than the IBMAC and IBModified methods.", "The DCIB method is more than two times more accurate than the DFIB method." ]
[ "incompressibility test" ]
method
{ "title": "The divergence-conforming immersed boundary method: Application to vesicle and capsule dynamics", "abstract": "We extend the recently introduced divergence-conforming immersed boundary (DCIB) method [1] to fluid-structure interaction (FSI) problems involving closed co-dimension one solids. We focus on capsules and vesicles, whose discretization is particularly challenging due to the higher-order derivatives that appear in their formulations. In two-dimensional settings, we employ cubic B-splines with periodic knot vectors to obtain discretizations of closed curves with C^2 inter-element continuity. In three-dimensional settings, we use analysis-suitable bi-cubic T-splines to obtain discretizations of closed surfaces with at least C^1 inter-element continuity. Large spurious changes of the fluid volume inside closed co-dimension one solids are a well-known issue for IB methods. The DCIB method results in volume changes orders of magnitude lower than conventional IB methods. This is a byproduct of discretizing the velocity-pressure pair with divergence-conforming B-splines, which lead to negligible incompressibility errors at the Eulerian level. The higher inter-element continuity of divergence-conforming B-splines is also crucial to avoid the quadrature/interpolation errors of IB methods becoming the dominant discretization error. Benchmark and application problems of vesicle and capsule dynamics are solved, including mesh-independence studies and comparisons with other numerical methods. Footnote: In [57], inf-sup stable, pointwise divergence-free, H^1-conforming, and pressure-robust tetrahedral elements on simplicial triangulations are constructed. The pressure space is simply the space of piecewise constants and the velocity space consists of piecewise cubic polynomials enriched with rational functions. Although these tetrahedral elements do not have the higher inter-element continuity of divergence-conforming B-splines, their use in immersed approaches for FSI is also worth consideration." }
{ "title": "An Immersed Boundary Method with Divergence-Free Velocity Interpolation and Force Spreading", "abstract": "The Immersed Boundary (IB) method is a mathematical framework for constructing robust numerical methods to study fluid-structure interaction in problems involving an elastic structure immersed in a viscous fluid. The IB formulation uses an Eulerian representation of the fluid and a Lagrangian representation of the structure. The Lagrangian and Eulerian frames are coupled by integral transforms with delta function kernels. The discretized IB equations use approximations to these transforms with regularized delta function kernels to interpolate the fluid velocity to the structure, and to spread structural forces to the fluid. It is well-known that the conventional IB method can suffer from poor volume conservation since the interpolated Lagrangian velocity field is not generally divergence-free, and so this can cause spurious volume changes. In practice, the lack of volume conservation is especially pronounced for cases where there are large pressure differences across thin structural boundaries. The aim of this paper is to greatly reduce the volume error of the IB method by introducing velocity-interpolation and force-spreading schemes with the properties that the interpolated velocity field in which the structure moves is at least C 1 and satisfies a continuous divergence-free condition, and that the force-spreading operator is the adjoint of the velocity-interpolation operator. We confirm through numerical experiments in two and * Corresponding author" }
2001.08244
1701.07169
Closed curve with active behavior
Furthermore, the DFIB method can only handle periodic boundary conditions, as explained by the authors in #REFR, while the DCIB method can handle Dirichlet and Neumann boundary conditions as well.
[ "With this discretization, h L ≈ h E /2 and ∆t = h E /10.", "In #OTHEREFR , the test is solved using three IB methods based on finite differences, namely, the DFIB method proposed in #OTHEREFR , the IBmodified method proposed in #OTHEREFR and the IBMAC method proposed in #OTHEREFR . These results are included in Fig.", "5 (a) together with the results obtained using the DCIB method with k = p = 2.", "The DCIB method is more than three orders of magnitude more accurate than the IBMAC and IBModified methods.", "The DCIB method is more than two times more accurate than the DFIB method." ]
[ "Therefore, the DCIB method is as flexible as a conventional IB method, e.g., IBMAC method, and it is able to impose the incompressibility constraint accurately at the same time.", "In a conventional IB method, the incompressibility error at the Lagrangian level does not decrease if the Lagrangian discretization and the time discretization are refined for a fixed Eulerian discretization #OTHEREFR .", "In #OTHEREFR , it was shown that the DFIB method is able to overcome this limitation.", "The authors used a fixed Eulerian mesh with 128 × 128 elements and showed second-order convergence of e V C as the time discretization and the Lagrangian discretization were refined three times. The results from #OTHEREFR are plotted in Fig. 5 (b) .", "We now pick a coarse Eulerian mesh with 32 × 32 elements and the time and Lagrangian discretizations are refined five times." ]
[ "periodic boundary conditions", "Neumann boundary conditions" ]
method
{ "title": "The divergence-conforming immersed boundary method: Application to vesicle and capsule dynamics", "abstract": "We extend the recently introduced divergence-conforming immersed boundary (DCIB) method [1] to fluid-structure interaction (FSI) problems involving closed co-dimension one solids. We focus on capsules and vesicles, whose discretization is particularly challenging due to the higher-order derivatives that appear in their formulations. In two-dimensional settings, we employ cubic B-splines with periodic knot vectors to obtain discretizations of closed curves with C 2 inter-element continuity. In three-dimensional settings, we use analysis-suitable bi-cubic T-splines to obtain discretizations of closed surfaces with at least C 1 inter-element continuity. Large spurious changes of the fluid volume inside closed codimension one solids is a well-known issue for IB methods. The DCIB method results in volume changes orders of magnitude lower than conventional IB methods. This is a byproduct of discretizing the velocity-pressure pair with divergence-conforming B-splines, which lead to negligible incompressibility errors at the Eulerian level. The higher interelement continuity of divergence-conforming B-splines is also crucial to avoid the quadrature/interpolation errors of IB methods becoming the dominant discretization error. Benchmark and application problems of vesicle and capsule dynamics are solved, including meshindependence studies and comparisons with other numerical methods. (Hugo Casquero) arXiv:2001.08244v1 [physics.flu-dyn] 22 Jan 2020 * In [57], inf-sup stable, pointwise divergence-free, H 1 -conforming, and pressure-robust tetrahedral elements on simplicial triangulations are constructed. The pressure space is simply the space of piecewise constants and the velocity space consists of piecewise cubic polynomials enriched with rational functions. 
Although these tetrahedral elements do not have the higher inter-element continuity of divergence-conforming B-splines, their use in immersed approaches for FSI is also worth consideration." }
{ "title": "An Immersed Boundary Method with Divergence-Free Velocity Interpolation and Force Spreading", "abstract": "The Immersed Boundary (IB) method is a mathematical framework for constructing robust numerical methods to study fluid-structure interaction in problems involving an elastic structure immersed in a viscous fluid. The IB formulation uses an Eulerian representation of the fluid and a Lagrangian representation of the structure. The Lagrangian and Eulerian frames are coupled by integral transforms with delta function kernels. The discretized IB equations use approximations to these transforms with regularized delta function kernels to interpolate the fluid velocity to the structure, and to spread structural forces to the fluid. It is well-known that the conventional IB method can suffer from poor volume conservation since the interpolated Lagrangian velocity field is not generally divergence-free, and so this can cause spurious volume changes. In practice, the lack of volume conservation is especially pronounced for cases where there are large pressure differences across thin structural boundaries. The aim of this paper is to greatly reduce the volume error of the IB method by introducing velocity-interpolation and force-spreading schemes with the properties that the interpolated velocity field in which the structure moves is at least C 1 and satisfies a continuous divergence-free condition, and that the force-spreading operator is the adjoint of the velocity-interpolation operator. We confirm through numerical experiments in two and * Corresponding author" }
2001.08244
1701.07169
Closed curve with active behavior
In #REFR , it was shown that the DFIB method is able to overcome this limitation.
[ "The DCIB method is more than three orders of magnitude more accurate than the IBMAC and IBModified methods.", "The DCIB method is more than two times more accurate than the DFIB method.", "Furthermore, the DFIB method can only handle periodic boundary conditions as explained by the authors in #OTHEREFR while the DCIB method can handle Dirichlet and Neumann boundary conditions as well.", "Therefore, the DCIB method is as flexible as a conventional IB method, e.g., IBMAC method, and it is able to impose the incompressibility constraint accurately at the same time.", "In a conventional IB method, the incompressibility error at the Lagrangian level does not decrease if the Lagrangian discretization and the time discretization are refined for a fixed Eulerian discretization #OTHEREFR ." ]
[ "The authors used a fixed Eulerian mesh with 128 × 128 elements and showed second-order convergence of e V C as the time discretization and the Lagrangian discretization were refined three times. The results from #OTHEREFR are plotted in Fig. 5 (b) .", "We now pick a coarse Eulerian mesh with 32 × 32 elements and the time and Lagrangian discretizations are refined five times.", "Since divergence-conforming B-splines lead to negligible incompressibility errors at the Eulerian level as long as the final linear system of equations is solved accurately (no matter how coarse the Eulerian mesh is), the DCIB method is able to decrease e V C with second-order convergence as shown in Fig.", "5 (b) , thus overcoming the limitation of conventional IB methods as well.", "In order to further show that the incompressibility error at the Lagrangian level with the DCIB method shown in Fig." ]
[ "DFIB method" ]
method
{ "title": "The divergence-conforming immersed boundary method: Application to vesicle and capsule dynamics", "abstract": "We extend the recently introduced divergence-conforming immersed boundary (DCIB) method [1] to fluid-structure interaction (FSI) problems involving closed co-dimension one solids. We focus on capsules and vesicles, whose discretization is particularly challenging due to the higher-order derivatives that appear in their formulations. In two-dimensional settings, we employ cubic B-splines with periodic knot vectors to obtain discretizations of closed curves with C 2 inter-element continuity. In three-dimensional settings, we use analysis-suitable bi-cubic T-splines to obtain discretizations of closed surfaces with at least C 1 inter-element continuity. Large spurious changes of the fluid volume inside closed codimension one solids is a well-known issue for IB methods. The DCIB method results in volume changes orders of magnitude lower than conventional IB methods. This is a byproduct of discretizing the velocity-pressure pair with divergence-conforming B-splines, which lead to negligible incompressibility errors at the Eulerian level. The higher interelement continuity of divergence-conforming B-splines is also crucial to avoid the quadrature/interpolation errors of IB methods becoming the dominant discretization error. Benchmark and application problems of vesicle and capsule dynamics are solved, including meshindependence studies and comparisons with other numerical methods. (Hugo Casquero) arXiv:2001.08244v1 [physics.flu-dyn] 22 Jan 2020 * In [57], inf-sup stable, pointwise divergence-free, H 1 -conforming, and pressure-robust tetrahedral elements on simplicial triangulations are constructed. The pressure space is simply the space of piecewise constants and the velocity space consists of piecewise cubic polynomials enriched with rational functions. 
Although these tetrahedral elements do not have the higher inter-element continuity of divergence-conforming B-splines, their use in immersed approaches for FSI is also worth consideration." }
{ "title": "An Immersed Boundary Method with Divergence-Free Velocity Interpolation and Force Spreading", "abstract": "The Immersed Boundary (IB) method is a mathematical framework for constructing robust numerical methods to study fluid-structure interaction in problems involving an elastic structure immersed in a viscous fluid. The IB formulation uses an Eulerian representation of the fluid and a Lagrangian representation of the structure. The Lagrangian and Eulerian frames are coupled by integral transforms with delta function kernels. The discretized IB equations use approximations to these transforms with regularized delta function kernels to interpolate the fluid velocity to the structure, and to spread structural forces to the fluid. It is well-known that the conventional IB method can suffer from poor volume conservation since the interpolated Lagrangian velocity field is not generally divergence-free, and so this can cause spurious volume changes. In practice, the lack of volume conservation is especially pronounced for cases where there are large pressure differences across thin structural boundaries. The aim of this paper is to greatly reduce the volume error of the IB method by introducing velocity-interpolation and force-spreading schemes with the properties that the interpolated velocity field in which the structure moves is at least C 1 and satisfies a continuous divergence-free condition, and that the force-spreading operator is the adjoint of the velocity-interpolation operator. We confirm through numerical experiments in two and * Corresponding author" }
1301.4008
1208.4734
Observation 7
We will also need a recent result by Caro and Hansberg #REFR, who established the following lower bound on the k-independence number of a graph.
[ "For a graph G of order n and an integer t ≥ 0, we have α t (G) + τ t (G) = n.", "We recall the following well-known Caro-Wei lower bound on the independence number in terms of the degree sequence of the graph." ]
[ "We begin by establishing the following upper bound on the simultaneous domination number of a combined graph in terms of the t-vertex cover number and also in terms of the sum of the average degrees from each factor.", ". .", ", F k are regular factors on n vertices each of degree δ, then", "Proof. Let G = G(F 1 , . . .", ", F k ) denote the combined graph of the factoring F 1 , F 2 , . . ." ]
[ "k-independence number" ]
background
{ "title": "Simultaneous Domination in Graphs", "abstract": "Let F 1 , F 2 , . . . , F k be graphs with the same vertex set V . A subset S ⊆ V is a simultaneous dominating set if for every i, 1 ≤ i ≤ k, every vertex of F i not in S is adjacent to a vertex in S in F i ; that is, the set S is simultaneously a dominating set in each graph F i . The cardinality of a smallest such set is the simultaneous domination number. We present general upper bounds on the simultaneous domination number. We investigate bounds in special cases, including the cases when the factors, F i , are r-regular or the disjoint union of copies of K r . Further we study the case when each factor is a cycle." }
{ "title": "New approach to the $k$-independence number of a graph", "abstract": "Let G = (V, E) be a graph and k ≥ 0 an integer. A k-independent set S ⊆ V is a set of vertices such that the maximum degree in the graph induced by S is at most k. With α k (G) we denote the maximum cardinality of a k-independent set of G. We prove that, for a graph G on n vertices and average degree d, α k (G) ≥ k+1 ⌈d⌉+k+1 n, improving the hitherto best general lower bound due to Caro and Tuza [Improved lower bounds on k-independence, J. Graph Theory 15 (1991), 99-107]." }
1803.07042
1208.4734
Introduction
Caro and Hansberg #REFR use the term 'k-independence number' to denote the maximum size of a set of vertices in a graph whose induced subgraph has maximum degree k. Thus, α 0 is the usual independence number.
[ "In this paper we generalize and improve the known spectral upper bounds for the kindependence number from #OTHEREFR and #OTHEREFR .", "For some cases, we also show that our bounds are sharp.", "As far as we are aware, there seems to be some conflict in the existing literature regarding the use of the term 'k-independence number'.", "The following list contains the three conflicting definitions, which all, nonetheless, are a natural generalization of the concept of independence number.", "1." ]
[ "2.Špacapan #OTHEREFR uses 'k-independence number' to denote the size of the largest kcolourable subgraph of G.", "With this notation, α 1 stands for the usual k-independence number of G.", "3.", "Fiol #OTHEREFR and Abiad, Tait, and Cioabȃ #OTHEREFR use 'k-independence number' to denote the size of the largest set of vertices such that any two vertices in the set are at distance larger than k.", "The latter definition is the one we use in this work." ]
[ "term 'k-independence number" ]
background
{ "title": "On the $k$-independence number of graphs", "abstract": "This paper generalizes and unifies the existing spectral bounds on the k-independence number of a graph, which is the maximum size of a set of vertices at pairwise distance greater than k. The previous bounds known in the literature follow as a corollary of the main results in this work. We show that for most cases our bounds outperform the previous known bounds. Some infinite graphs where the bounds are tight are also presented. Finally, as a byproduct, we derive some lower spectral bounds for the diameter of a graph." }
{ "title": "New approach to the $k$-independence number of a graph", "abstract": "Let G = (V, E) be a graph and k ≥ 0 an integer. A k-independent set S ⊆ V is a set of vertices such that the maximum degree in the graph induced by S is at most k. With α k (G) we denote the maximum cardinality of a k-independent set of G. We prove that, for a graph G on n vertices and average degree d, α k (G) ≥ k+1 ⌈d⌉+k+1 n, improving the hitherto best general lower bound due to Caro and Tuza [Improved lower bounds on k-independence, J. Graph Theory 15 (1991), 99-107]." }
1509.08960
1207.5777
INTRODUCTION
TGI builds upon our prior work on DeltaGraph #REFR , where the focus was on retrieving individual snapshots efficiently; we discuss the differences between the two in more detail in Section 4.
[ "In this paper, we present a graph data management system, called Historical Graph Store (HGS) , that provides an ecosystem for managing and analyzing large historical traces of graphs. HGS consists of two key distinct components.", "First, the Temporal Graph Index (TGI), is an index that compactly stores the entire history of a graph by appropriately partitioning and encoding the differences over time (called deltas).", "These deltas are organized to optimize the retrieval of several temporal graph primitives such as neighborhood versions, node histories, and graph snapshots.", "TGI is designed to use a distributed key-value store to store the partitioned deltas, and can thus leverage the scalability afforded by those systems (our implementation uses Apache Cassandra 1 key-value store).", "TGI is a tunable index structure, and we investigate the impact of tuning the different parameters through an extensive empirical evaluation." ]
[ "The second component of HGS is a Temporal Graph Analysis Framework (TAF), which provides an expressive library to specify a wide range of temporal graph analysis tasks and to execute them at scale in a cluster environment.", "The library is based on a novel set of temporal graph operators that enable a user to analyze the history of a graph in a variety of manners.", "The execution engine itself is based on Apache Spark #OTHEREFR , a large-scale in-memory cluster computing framework.", "Outline: The rest of the paper is organized as follows.", "In Section 2, we survey the related work on graph data stores, temporal indexing, and other topics relevant to the scope of the paper." ]
[ "DeltaGraph" ]
background
{ "title": "Storing and Analyzing Historical Graph Data at Scale", "abstract": "The work on large-scale graph analytics to date has largely focused on the study of static properties of graph snapshots. However, a static view of interactions between entities is often an oversimplification of several complex phenomena like the spread of epidemics, information diffusion, formation of online communities, and so on. Being able to find temporal interaction patterns, visualize the evolution of graph properties, or even simply compare them across time, adds significant value in reasoning over graphs. However, because of lack of underlying data management support, an analyst today has to manually navigate the added temporal complexity of dealing with large evolving graphs. In this paper, we present a system, called Historical Graph Store, that enables users to store large volumes of historical graph data and to express and run complex temporal graph analytical tasks against that data. It consists of two key components: a Temporal Graph Index (TGI), that compactly stores large volumes of historical graph evolution data in a partitioned and distributed fashion; it provides support for retrieving snapshots of the graph as of any timepoint in the past or evolution histories of individual nodes or neighborhoods; and a Spark-based Temporal Graph Analysis Framework (TAF), for expressing complex temporal analytical tasks and for executing them in an efficient and scalable manner. Our experiments demonstrate our system's efficient storage, retrieval and analytics across a wide variety of queries on large volumes of historical graph data." }
{ "title": "Efficient snapshot retrieval over historical graph data", "abstract": "We present a distributed graph database system to manage historical data for large evolving information networks, with the goal to enable temporal and evolutionary queries and analysis. The cornerstone of our system is a novel, userextensible, highly tunable, and distributed hierarchical index structure called DeltaGraph, that enables compact recording of the historical network information, and that supports efficient retrieval of historical graph snapshots for single-site or parallel processing. Our system exposes a general programmatic API to process and analyze the retrieved snapshots. Along with the original graph data, DeltaGraph can also maintain and index auxiliary information; this functionality can be used to extend the structure to efficiently execute queries like subgraph pattern matching over historical data. We develop analytical models for both the storage space needed and the snapshot retrieval times to aid in choosing the right construction parameters for a specific scenario. We also present an in-memory graph data structure called GraphPool that can maintain hundreds of historical graph instances in main memory in a non-redundant manner. We present a comprehensive experimental evaluation that illustrates the effectiveness of our proposed techniques at managing historical graph information. ACKNOWLEDGMENT" }
1409.0798
1207.5777
Version-First Representation
Further, we plan to consider adding what we call Steiner datasets to optimize retrieval: analogous to Steiner points, these "extra" datasets can be used to reduce retrieval costs, as demonstrated in our work on historical graph data management #REFR.
[ "Challenge 2: Version Graph Encoding: The problem of efficiently encoding a graph of versions is also challenging.", "Just because two versions are adjacent in the version graph doesn't mean that they should be stored as differences against each other.", "For example, for the version graph depicted in Figure 3 , depending on the exact changes made, the difference between T3 and T4 may be smaller than the difference between T3 and T2.", "There is also a trade-off between materializing a specific version or just storing a delta from a past version, since following delta chains may be expensive.", "We plan to employ optimization techniques to find the optimal encoding of the version graph by considering all possible pairwise encodings." ]
[ "Effective heuristics will be needed because this search space is very large." ]
[ "historical graph data" ]
background
{ "title": "DataHub: Collaborative Data Science&Dataset Version Management at Scale", "abstract": "Relational databases have limited support for data collaboration, where teams collaboratively curate and analyze large datasets. Inspired by software version control systems like git, we propose (a) a dataset version control system, giving users the ability to create, branch, merge, difference and search large, divergent collections of datasets, and (b) a platform, DATAHUB, that gives users the ability to perform collaborative data analysis building on this version control system. We outline the challenges in providing dataset version control at scale." }
{ "title": "Efficient snapshot retrieval over historical graph data", "abstract": "We present a distributed graph database system to manage historical data for large evolving information networks, with the goal to enable temporal and evolutionary queries and analysis. The cornerstone of our system is a novel, userextensible, highly tunable, and distributed hierarchical index structure called DeltaGraph, that enables compact recording of the historical network information, and that supports efficient retrieval of historical graph snapshots for single-site or parallel processing. Our system exposes a general programmatic API to process and analyze the retrieved snapshots. Along with the original graph data, DeltaGraph can also maintain and index auxiliary information; this functionality can be used to extend the structure to efficiently execute queries like subgraph pattern matching over historical data. We develop analytical models for both the storage space needed and the snapshot retrieval times to aid in choosing the right construction parameters for a specific scenario. We also present an in-memory graph data structure called GraphPool that can maintain hundreds of historical graph instances in main memory in a non-redundant manner. We present a comprehensive experimental evaluation that illustrates the effectiveness of our proposed techniques at managing historical graph information. ACKNOWLEDGMENT" }
1803.11432
1206.1219
Preliminaries.
Related results can be found in #REFR, where the regularity (Lipschitzianity in the spatial variable, Hölder continuity in time) is established for the corresponding two-player impulse control game with an uncontrolled process.
[ "p the value of the game exists and is given by:", "Remark 4.2. By definition of the value functions we automatically have", "for all x ∈ [t, τ S ] × R p .", "To prove Theorem 4.1, it therefore remains to establish the reverse inequality of (4.4).", "We now prove the regularity of the value functions associated to the game." ]
[ "Lemma 4.3.", "The functions V − and V + may be equivalently written as:", "whereŪ (t,τS) is the set of player I admissible controls which have no impulses at time s = t and correspondingly,Ā (t,τS) (resp.,B (t,τS) is the set of all player I (resp., player II) non-anticipative strategies with controls drawn from the setŪ (t,τS) (resp.,T \\t)." ]
[ "corresponding two-player impulse" ]
background
{ "title": "A Viscosity Approach to Stochastic Differential Games of Control and Stopping Involving Impulsive Control", "abstract": "Abstract. This paper analyses a stochastic differential game of control and stopping in which one of the players modifies a diffusion process using impulse controls, an adversary then chooses a stopping time to end the game. The paper firstly establishes the regularity and boundedness of the upper and lower value functions from which an appropriate variant of the dynamic programming principle (DPP) is derived. It is then proven that the upper and lower value functions coincide so that the game admits a value and that the value of the game is a unique viscosity solution to a HJBI equation described by a double obstacle quasi-integro-variational inequality." }
{ "title": "Stochastic differential games involving impulse controls and double-obstacle quasi-variational inequalities", "abstract": "We study a two-player zero-sum stochastic differential game with both players adopting impulse controls, on a finite time horizon. The Hamilton-JacobiBellman-Isaacs (HJBI) partial differential equation of the game turns out to be a double-obstacle quasi-variational inequality, therefore the two obstacles are implicitly given. We prove that the upper and lower value functions coincide, indeed we show, by means of the dynamic programming principle for the stochastic differential game, that they are the unique viscosity solution to the HJBI equation, therefore proving that the game admits a value." }
2001.03659
1703.06953
INTRODUCTION
This approach was improved in #REFR by adding upsampling layers and making the network fully convolutional.
[ "Recently there has been a large number of applications of convolutional neural networks (ConvNets) to neural style transfer.", "VGG16 #OTHEREFR ) was used to extract features from both content and style images #OTHEREFR to transfer style onto a randomly created image or the content image." ]
[ "A number of generative adversarial networks, GANs #OTHEREFR were developed and successfully applied to the neural style transfer for images and videos, such as CycleGANs , Pix2pix , pose-guided GANs #OTHEREFR .", "Font neural style transfer is an area of neural style transfer that is concerned with the transfer and generation of font styles.", "In #OTHEREFR GAN was developed that synthesizes unseen glyphs (characters) given the previously observed ones in a particular decorative style.", "In (Yang et al., 2019) GANs are trained to transfer style (fire, water, smoke) to glyphs to create an artistic representation.", "GlyphGAN #OTHEREFR was recently developed for generation of glyphs in a required style." ]
[ "network", "upsampling layers" ]
method
{ "title": "Network of Steel: Neural Font Style Transfer from Heavy Metal to Corporate Logos", "abstract": "We introduce a method for transferring style from the logos of heavy metal bands onto corporate logos using a VGG16 network. We establish the contribution of different layers and loss coefficients to the learning of style, minimization of artefacts and maintenance of readability of corporate logos. We find layers and loss coefficients that produce a good tradeoff between heavy metal style and corporate logo readability. This is the first step both towards sparse font style transfer and corporate logo decoration using generative networks. Heavy metal and corporate logos are very different artistically, in the way they emphasize emotions and readability, therefore training a model to fuse the two is an interesting problem." }
{ "title": "Multi-style Generative Network for Real-time Transfer", "abstract": "Recent work in style transfer learns a feed-forward generative network to approximate the prior optimizationbased approaches, resulting in real-time performance. However, these methods require training separate networks for different target styles which greatly limits the scalability. We introduce a Multi-style Generative Network (MSGNet) with a novel Inspiration Layer, which retains the functionality of optimization-based approaches and has the fast speed of feed-forward networks. The proposed Inspiration Layer explicitly matches the feature statistics with the target styles at run time, which dramatically improves versatility of existing generative network, so that multiple styles can be realized within one network. The proposed MSG-Net matches image styles at multiple scales and puts the computational burden into the training. The learned generator is a compact feed-forward network that runs in real-time after training. Comparing to previous work, the proposed network can achieve fast style transfer with at least comparable quality using a single network. The experimental results have covered (but are not limited to) simultaneous training of twenty different styles in a single network. The complete software system and pre-trained models will be publicly available upon publication 1 ." }
2001.03659
1703.06953
MSG Net
MSG Net was introduced in #REFR. We fine-tuned it on our data scraped from the internet: 19 corporate logos (content) and 11 heavy metal logos (style).
[]
[ "Style loss hyperparameter was set to 10000, content loss hyperparameter to 1, learning rate to 1.0.", "Although MSG Net is more advanced than plain VGG16: it has a fully convolutional architecture, learns weights to evolve an image with the transferred style and has more loss functions, it performs worse than Network of Steel in terms of sparse style transfer, as it does not transfer any font style from heavy metal logos onto the font in the corporate logos at all.", "MSG-Net manages to evolve some small elements around the glyphs, that are barely noticeable." ]
[ "logos", "style" ]
method
{ "title": "Network of Steel: Neural Font Style Transfer from Heavy Metal to Corporate Logos", "abstract": "We introduce a method for transferring style from the logos of heavy metal bands onto corporate logos using a VGG16 network. We establish the contribution of different layers and loss coefficients to the learning of style, minimization of artefacts and maintenance of readability of corporate logos. We find layers and loss coefficients that produce a good tradeoff between heavy metal style and corporate logo readability. This is the first step both towards sparse font style transfer and corporate logo decoration using generative networks. Heavy metal and corporate logos are very different artistically, in the way they emphasize emotions and readability, therefore training a model to fuse the two is an interesting problem." }
{ "title": "Multi-style Generative Network for Real-time Transfer", "abstract": "Recent work in style transfer learns a feed-forward generative network to approximate the prior optimizationbased approaches, resulting in real-time performance. However, these methods require training separate networks for different target styles which greatly limits the scalability. We introduce a Multi-style Generative Network (MSGNet) with a novel Inspiration Layer, which retains the functionality of optimization-based approaches and has the fast speed of feed-forward networks. The proposed Inspiration Layer explicitly matches the feature statistics with the target styles at run time, which dramatically improves versatility of existing generative network, so that multiple styles can be realized within one network. The proposed MSG-Net matches image styles at multiple scales and puts the computational burden into the training. The learned generator is a compact feed-forward network that runs in real-time after training. Comparing to previous work, the proposed network can achieve fast style transfer with at least comparable quality using a single network. The experimental results have covered (but are not limited to) simultaneous training of twenty different styles in a single network. The complete software system and pre-trained models will be publicly available upon publication 1 ." }
1802.07101
1703.06953
Related Work
Compared with #REFR , our work has the benefits of flexible continuous and spatial stroke size control.
[ "Controlling #OTHEREFR .", "Another related work is Wang et al.'s algorithm which aims to learn large brush strokes for high-resolution images #OTHEREFR .", "They find that current Fast Style Transfer algorithms fail to paint large strokes in high-resolution images and propose a coarse-to-fine architecture to solve this problem.", "Note that the work in #OTHEREFR is intrinsically different from this paper as one single pre-trained model in #OTHEREFR still produces one stroke size for the same input image.", "A concurrent work in #OTHEREFR also explores the issue of stroke size control." ]
[ "Regulating receptive field in neural networks.", "The receptive field is one of the basic concepts in convolutional neural networks, which refers to a region of the input image that one neuron is responsive to.", "It can affect the performance of the networks and becomes a critical issue in many tasks (e.g., semantic segmentation #OTHEREFR , image parsing).", "To regulate the receptive field, #OTHEREFR proposes the operation of dilated convolution (also called atrous convolution in #OTHEREFR ), which supports the expansion of receptive field by setting different dilation values and is widely used in many generation tasks like #OTHEREFR .", "Another work in #OTHEREFR further proposes a deformable convolution which augments the sampling locations in regular convolution with additional offsets. Furthermore, Wei et al." ]
[ "spatial stroke size" ]
result
{ "title": "Stroke Controllable Fast Style Transfer with Adaptive Receptive Fields", "abstract": "Abstract. The Fast Style Transfer methods have been recently proposed to transfer a photograph to an artistic style in real-time. This task involves controlling the stroke size in the stylized results, which remains an open challenge. In this paper, we present a stroke controllable style transfer network that can achieve continuous and spatial stroke size control. By analyzing the factors that influence the stroke size, we propose to explicitly account for the receptive field and the style image scales. We propose a StrokePyramid module to endow the network with adaptive receptive fields, and two training strategies to achieve faster convergence and augment new stroke sizes upon a trained model respectively. By combining the proposed runtime control strategies, our network can achieve continuous changes in stroke sizes and produce distinct stroke sizes in different spatial regions within the same output image." }
{ "title": "Multi-style Generative Network for Real-time Transfer", "abstract": "Recent work in style transfer learns a feed-forward generative network to approximate the prior optimizationbased approaches, resulting in real-time performance. However, these methods require training separate networks for different target styles which greatly limits the scalability. We introduce a Multi-style Generative Network (MSGNet) with a novel Inspiration Layer, which retains the functionality of optimization-based approaches and has the fast speed of feed-forward networks. The proposed Inspiration Layer explicitly matches the feature statistics with the target styles at run time, which dramatically improves versatility of existing generative network, so that multiple styles can be realized within one network. The proposed MSG-Net matches image styles at multiple scales and puts the computational burden into the training. The learned generator is a compact feed-forward network that runs in real-time after training. Comparing to previous work, the proposed network can achieve fast style transfer with at least comparable quality using a single network. The experimental results have covered (but are not limited to) simultaneous training of twenty different styles in a single network. The complete software system and pre-trained models will be publicly available upon publication 1 ." }
1905.10099
1903.03784
Introduction
For instance, the closest thing to a map one can obtain from using several SW univariate projections is an average of several permutations, which is not a map but a transport plan #REFR [20, p.6].
[ "For these reasons, more robust or more computationally efficient approaches based on lower-dimensional projections of OT have recently been developed.", "In particular, sliced Wasserstein (SW) distances #OTHEREFR leverage the simplicity of OT between one-dimensional measures to define distances and barycentres by averaging the optimal transport between projections onto several random directions.", "This approach has been applied to alleviate training complexity in the GAN/VAE literature #OTHEREFR .", "Very recently, [19] consider projections on k-dimensional subspaces that are adversarially selected in a min-max/max-min framework.", "However, all of these approaches only carry out one-half of the goal of OT: by design, they can only obtain maps in subspaces that are optimal (or nearly so) between the projected measures, but not transportation maps in the entire space in which the original measures live." ]
[ "Our approach.", "Whereas the approaches cited above focus on OT maps and plans in the projection subspace only, we consider plans and maps on the entire space that are constrained to be optimal when projected on E.", "This results in the definition of a class of transportation plans that figuratively need to make a \"detour\" in E.", "We propose two constructions to recover such maps, that can naturally be obtained respectively as the limit of discrete sampling on the one hand, and as the optimal conditioned map on the other hand.", "Paper Structure." ]
[ "projections" ]
background
{ "title": "Subspace Detours: Building Transport Plans that are Optimal on Subspace Projections", "abstract": "Sliced Wasserstein metrics between probability measures solve the optimal transport (OT) problem on univariate projections, and average such maps across projections. The recent interest for the SW distance shows that much can be gained by looking at optimal maps between measures in smaller subspaces, as opposed to the curse-of-dimensionality price one has to pay in higher dimensions. Any transport estimated in a subspace remains, however, an object that can only be used in that subspace. We propose in this work two methods to extrapolate, from an transport map that is optimal on a subspace, one that is nearly optimal in the entire space. We prove that the best optimal transport plan that takes such \"subspace detours\" is a generalization of the Knothe-Rosenblatt transport. We show that these plans can be explicitly formulated when comparing Gaussians measures (between which the Wasserstein distance is usually referred to as the Bures or Fréchet distance). Building from there, we provide an algorithm to select optimal subspaces given pairs of Gaussian measures, and study scenarios in which that mediating subspace can be selected using prior information. We consider applications to NLP and evaluation of image quality (FID scores)." }
{ "title": "Orthogonal Estimation of Wasserstein Distances", "abstract": "Wasserstein distances are increasingly used in a wide variety of applications in machine learning. Sliced Wasserstein distances form an important subclass which may be estimated efficiently through one-dimensional sorting operations. In this paper, we propose a new variant of sliced Wasserstein distance, study the use of orthogonal coupling in Monte Carlo estimation of Wasserstein distances and draw connections with stratified sampling, and evaluate our approaches experimentally in a range of large-scale experiments in generative modelling and reinforcement learning." }
1506.02455
cs/0112012
Uniform generation of finite traces, 2: evaluating an average cost.
The terms λ_M(k), coefficients of the growth series G(X) = 1/µ_M(X), are asymptotically equivalent to C·p_0^{-k} for some constant C > 0 if M is irreducible #REFR . The result in Lemma 3 follows.
[ "In turn, this directly depends on the numbers θ_k(x) = #{y ∈ M_k : y ≤ x} of terms in the sum (7) defining φ(x).", "The numbers θ_k(x) might be arbitrarily large; for", "However, we have the following result.", "• Lemma 3. Assume that M is irreducible. Then, there exists C > 0 such that:", "To see this, apply #OTHEREFR to the constant function φ = 1 on M_k, whose associated function is φ = θ_k on M^(k), to obtain:" ]
[ "Applying usual techniques #OTHEREFR to specifically retrieve all traces y ≤ x of length k = τ(x) is feasible in time O(k) on average and allows us to compute φ(x), and consequently to estimate the expectation E_ν φ(C_1 · . . .", "· C_k) via Markov chain sampling and a Monte Carlo algorithm.", "By (9), applying the same estimation technique to the function φ = 1 yields an estimate for the normalization factor p_0^k · λ_M(k).", "In passing, this also yields a Monte Carlo estimate for the number λ_M(k).", "Altogether, we are thus able to estimate with arbitrary precision both terms in the right-hand member of (8), hence yielding an accurate estimation of E_{ν_{M_k}} φ." ]
[ "Lemma", "growth series" ]
background
{ "title": "Uniform generation in trace monoids", "abstract": "We consider the problem of random uniform generation of traces (the elements of a free partially commutative monoid) in light of the uniform measure on the boundary at infinity of the associated monoid. We obtain a product decomposition of the uniform measure at infinity if the trace monoid has several irreducible components-a case where other notions such as Parry measures, are not defined. Random generation algorithms are then examined." }
{ "title": "Computing the average parallelism in trace monoids", "abstract": "The height of a trace is the height of the corresponding heap of pieces in Viennot's representation, or equivalently the number of factors in its Cartier-Foata decomposition. Let h(t) and |t| stand respectively for the height and the length of a trace t. We prove that the bivariate commutative series t x h(t) y |t| is rational, and we give a finite representation of it. We use the rationality to obtain precise information on the asymptotics of the number of traces of a given height or length. Then, we study the average height of a trace for various probability distributions on traces. For the uniform probability distribution on traces of the same length (resp. of the same height), the asymptotic average height (resp. length) exists and is an algebraic number. To illustrate our results and methods, we consider a couple of examples: the free commutative monoid and the trace monoid whose independence graph is the ladder graph." }
2003.03762
cs/0112012
Digraphs of states-and-cliques and its augmented version ( DSC and
The digraph of states-and-cliques (DSC) is the analogue of the digraph of cliques for trace monoids; the extension to the ADSC in the following definition mimics the analogous construction introduced for trace monoids in #REFR .
[ "ADSC) -For combinatorics purposes, and for instance for counting the executions of a concurrent system, the multigraph of states is of little help.", "Indeed, two different paths in the multigraph of states, of the form (α · a) · b and (α · b) · a with ab = ba, count for only one execution.", "Instead, one must rely on the normal form of traces, and thus of executions.", "For this purpose, we introduce two digraphs related to the normal form of traces, adapted to the framework of concurrent systems." ]
[ "The augmented digraph of states-and-cliques (ADSC) of S has a node for each triple of the form (α, c, i), where (α, c) is a node of the DSC, and i = 1, . . . , |c|.", "There is an edge from (α, c, i) to (β, d, j) if:", "1.", "(α, c) = (β, d) and j = i + 1; or 2.", "i = |c| and j = 1 and there is an edge from (α, c) to (β, d) in the DSC." ]
[ "trace monoids" ]
background
{ "title": "A spectral property for concurrent systems and some probabilistic applications", "abstract": "We study trace theoretic concurrent systems. We introduce a notion of irreducible concurrent system and we prove the equivalence, for an accessible concurrent system, between irreducibility and a spectral property. The spectral property states a strict inequality between certain spectral radii of convergence of growth series. The proof that we present relies on Analytic combinatorics techniques. The spectral property is the cornerstone of our theory, in a framework where the Perron-Frobenius theory does not apply directly. We apply the spectral property to the probabilistic theory of concurrent systems. We prove on the one hand that the Markov chain of states-and-cliques associated with the uniform measure stays within the so-called positive nodes. On the other hand, we prove the uniqueness of the uniform measure. trace monoid, to the uniform Bernoulli measure on infinite sequences of letters [3] . This uniform measure is determined by a single scalar value, given by the unique root of smallest modulus of the Möbius polynomial. Hence, briefly speaking, the probabilistic theory of trace monoids corresponds to a theory of i.i.d. discrete random variables with concurrency. From there, a natural extension to look for is a theory of \"discrete Markov chains with concurrency\", which is precisely the topic of probabilistic concurrent systems. Different research fields may have an interest in a theory of probabilistic concurrent systems, and for different reasons. The probabilistic model checking of concurrent systems lacks such a theory. Instead, computer scientists rely on sequential probabilistic models including nondeterminism to take into account concurrency [4, 12] , which are not well suited for all situations. 
Hence a trace theory of probabilistic concurrent systems fills a gap in discrete event systems theory with applications in probabilistic simulation and in probabilistic model checking for real life models. The detailed study of the uniform measure is of particular interest in this respect. Discrete time Markov chains are one of the most successful models of probability theory, with applications as versatile as they are powerful. A notion of "Markov chain with several tokens moving concurrently" is therefore of interest also for probability theorists. The graphical model of Petri nets provides such models, which fit into the framework of this paper, and provides non trivial new probabilistic systems. Beside the expected applications in discrete event systems theory, an example that we present in Section 3.4 of the paper shows an interplay between concurrent systems and topics from Combinatorics. We use the tilings of the Aztec diamond of order 2 to define a simple, yet non trivial concurrent system, for which we obtain the probabilistic parameters at the end of the paper; we only scratch the surface of a possible interplay between tiling models and probabilistic concurrent systems theory. Description of the results. The digraph of cliques, which is effective for studying the combinatorics of a trace monoid, has to be replaced for a concurrent system by the digraph of states-and-cliques (DSC), which was introduced in [1]. Executions of the system correspond bijectively to paths in the DSC. The main difficulty comes from the following fact: even if the concurrent system is irreducible, the DSC is not strongly connected in general. This contrasts with trace monoid theory, where a fundamental result is the strong connectedness and aperiodicity of the digraph of cliques of an irreducible trace monoid. And indeed, simple examples described in Section 3.4 already give rise to digraphs of states-and-cliques with several strongly connected components.
What is interesting however is to understand the status of these several components, and to prove that they do not play the same role with respect to the combinatorics of the system. Our strategy is to prove a spectral property for irreducible concurrent systems. Intuitively speaking, the spectral property of an irreducible concurrent system, resulting from the action of a trace monoid M(Σ, I) on a set of states X, says the following: for any action a ∈ Σ, the new concurrent system obtained by restriction after removing the action a is \"really smaller\" than the original concurrent system. The latter notion is rigorously formalized through the notion of spectral radius, or dually of radius of convergence of certain growth series. This spectral property is, of course, also true for trace monoids. One way to prove it for trace monoids is to use the Perron-Frobenius theory for primitive matrices-we give in the paper yet another proof, thereby sustaining the expectation of a proof for concurrent systems that would not be a direct application of the Perron-Frobenius theory. But the straightforward application of the Perron-Frobenius theory is unavailable for concurrent systems since their DSC is not strongly connected in general." }
{ "title": "Computing the average parallelism in trace monoids", "abstract": "The height of a trace is the height of the corresponding heap of pieces in Viennot's representation, or equivalently the number of factors in its Cartier-Foata decomposition. Let h(t) and |t| stand respectively for the height and the length of a trace t. We prove that the bivariate commutative series t x h(t) y |t| is rational, and we give a finite representation of it. We use the rationality to obtain precise information on the asymptotics of the number of traces of a given height or length. Then, we study the average height of a trace for various probability distributions on traces. For the uniform probability distribution on traces of the same length (resp. of the same height), the asymptotic average height (resp. length) exists and is an algebraic number. To illustrate our results and methods, we consider a couple of examples: the free commutative monoid and the trace monoid whose independence graph is the ladder graph." }
1911.05911
1906.04661
Fast Algorithms for High-Dimensional Robust Estimation
Intriguingly, #REFR also provided evidence that the runtime of their algorithm may be best possible with current algorithmic techniques.
[ "At a high level, a conceptual commonality of these works is that they leverage techniques from continuous optimization to develop iterative methods (with each iteration taking near-linear time) that are able to deal with multiple directions in parallel.", "In particular, the total number of iterations in each of these methods is at most poly-logarithmic in d/ǫ.", "Beyond robust mean estimation, the work #OTHEREFR recently studied the problem of robust covariance estimation with a focus on designing faster algorithms.", "By building on the techniques of #OTHEREFR , they obtained an algorithm for this problem with runtime Õ(d^{3.26}).", "Rather curiously, this runtime is not linear in the input size, but nearly matches the (best known) runtime of the corresponding non-robust estimator (i.e., computing the empirical covariance)." ]
[]
[ "current algorithmic techniques" ]
background
{ "title": "Recent Advances in Algorithmic High-Dimensional Robust Statistics", "abstract": "Learning in the presence of outliers is a fundamental problem in statistics. Until recently, all known efficient unsupervised learning algorithms were very sensitive to outliers in high dimensions. In particular, even for the task of robust mean estimation under natural distributional assumptions, no efficient algorithm was known. Recent work in theoretical computer science gave the first efficient robust estimators for a number of fundamental statistical tasks, including mean and covariance estimation. Since then, there has been a flurry of research activity on algorithmic high-dimensional robust estimation in a range of settings. In this survey article, we introduce the core ideas and algorithmic techniques in the emerging area of algorithmic high-dimensional robust statistics with a focus on robust mean estimation. We also provide an overview of the approaches that have led to computationally efficient robust estimators for a range of broader statistical tasks and discuss new directions and opportunities for future work. * This article is an expanded version of an invited chapter entitled \"" }
{ "title": "Faster Algorithms for High-Dimensional Robust Covariance Estimation", "abstract": "We study the problem of estimating the covariance matrix of a high-dimensional distribution when a small constant fraction of the samples can be arbitrarily corrupted. Recent work gave the first polynomial time algorithms for this problem with near-optimal error guarantees for several natural structured distributions. Our main contribution is to develop faster algorithms for this problem whose running time nearly matches that of computing the empirical covariance. Given N = Ω(d^2/ǫ^2) samples from a d-dimensional Gaussian distribution, an ǫ-fraction of which may be arbitrarily corrupted, our algorithm runs in time O(d^{3.26})/poly(ǫ) and approximates the unknown covariance matrix to optimal error up to a logarithmic factor. Previous robust algorithms with comparable error guarantees all have runtimes Ω(d^{2ω}) when ǫ = Ω(1), where ω is the exponent of matrix multiplication. We also provide evidence that improving the running time of our algorithm may require new algorithmic techniques." }
1109.3240
0705.4485
ACTIVE LEARNING
One natural approach is to explore the node v with the largest mutual information (MI) between its label t(v) and the labels t(G \ v) of the other nodes according to the Gibbs distribution #REFR .
[ "In the active learning setting, the algorithm can learn the class label of any given node, but at a cost-say, by devoting resources in the laboratory or the field.", "Since these resources are limited, it has to decide which node to explore.", "Its goal is to explore a small set of nodes and use their labels to guess the labels of the remaining nodes." ]
[ "We can write this as the difference between the entropy of t(G\\v) and its conditional entropy given t(v),", "Here H(G\\v | v) is the entropy, averaged over t(v) according to the marginal of t(v) in the Gibbs distribution, of the joint distribution of t(G \\ v) conditioned on t(v).", "In other words, MI(v) is the expected amount of information we will gain about t(G \\ v), or equivalently the expected decrease in the entropy, that will result from learning t(v).", "Since the mutual information is symmetric, we also have", "where" ]
[ "nodes", "largest mutual information" ]
method
{ "title": "Active learning for node classification in assortative and disassortative networks", "abstract": "In many real-world networks, nodes have class labels, attributes, or variables that affect the network's topology. If the topology of the network is known but the labels of the nodes are hidden, we would like to select a small subset of nodes such that, if we knew their labels, we could accurately predict the labels of all the other nodes. We develop an active learning algorithm for this problem which uses information-theoretic techniques to choose which nodes to explore. We test our algorithm on networks from three different domains: a social network, a network of English words that appear adjacently in a novel, and a marine food web. Our algorithm makes no initial assumptions about how the groups connect, and performs well even when faced with quite general types of network structure. In particular, we do not assume that nodes of the same class are more likely to be connected to each other-only that they connect to the rest of the network in similar ways." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
0909.4331
0705.4485
3
One such model is that of , which extends the mixed-membership stochastic blockmodel #REFR to generate node attributes.
[ "(a) Draw binary link indicator 2.2. Link probability function.", "The function ψ is the link probability function that defines a distribution over the link between two documents.", "This function is dependent on the two vectors of topic assignments that generated their words,", "This modeling decision is important.", "A natural alternative is to model links as a function of the topic proportion vectors θ_d and θ_{d′}." ]
[ "Similar in spirit is the nongenerative model of #OTHEREFR which \"regularizes\" topic models with graph information.", "The issue with these formulations is that the links and words of a single document are possibly explained by disparate sets of topics, thereby hindering their ability to make predictions about words from links and vice versa.", "In enforcing that the link probability function depends on the latent topic assignments z_d and z_{d′}, we enforce that the specific topics used to generate the links are those used to generate the words.", "A similar mechanism is employed in #OTHEREFR for non-pairwise response variables.", "In estimating parameters, this means that the same topic indices describe both patterns of recurring words and patterns in the links." ]
[ "mixed-membership stochastic blockmodel" ]
background
{ "title": "Hierarchical relational models for document networks", "abstract": "We develop the relational topic model (RTM), a hierarchical model of both network structure and node attributes. We focus on document networks, where the attributes of each document are its words, that is, discrete observations taken from a fixed vocabulary. For each pair of documents, the RTM models their link as a binary random variable that is conditioned on their contents. The model can be used to summarize a network of documents, predict links between them, and predict words within them. We derive efficient inference and estimation algorithms based on variational methods that take advantage of sparsity and scale with the number of links. We evaluate the predictive performance of the RTM for large networks of scientific abstracts, web documents, and geographically tagged news." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1211.0028
0705.4485
Related Work and Conclusion
Instead, we have taken a probabilistic perspective, borrowing from the MMSB model #REFR to cast links into the same latent topic space as the text.
[ "The literature contains other topic models that combine several data modalities; ours is distinguished by the assumptions it makes.", "In particular, existing topic models of text and network data either treat the network as an outcome of the text topics (RTM #OTHEREFR ), or define new topics for each link in the network (ART #OTHEREFR ). The Pairwise Link-LDA model of Nallapati et al.", "#OTHEREFR is the most similar to ours, except (1) it does not model labels, (2) it models asymmetric links only, and crucially, (3) its inference algorithm is infeasible for even P = 40,000 users (the size of our training S̄(C)'s) because it models all O(P^2) positive and zero links.", "Our model escapes this complexity trap by only considering the positive links.", "We also note that past work on Facebook's data #OTHEREFR used the network implicitly, by summing features over neighboring users." ]
[ "Thus, links are neither a precursor to nor an outcome of the text, but equals, resulting in an intuitive scheme where both text and links derive from specific topics.", "The manner in which we model the labels is borrowed from sLDA #OTHEREFR , except that our links also influence the observed labels y.", "In conclusion, we have tackled salient questions about user interests and friendships on Facebook, by way of a system that combines text, network and label data to produce insightful visualizations of the social structure generated by millions of Facebook users.", "Our system's key component is a latent space model (SM4) that learns the aggregate relationships between user text, friendships, and interests, and this allows us to study millions of users at a macroscopic level.", "The SM4 model is closely related to the supervised text model of sLDA #OTHEREFR and the network model of MMSB #OTHEREFR , and combines features of both models to address our challenges." ]
[ "latent topic space" ]
method
{ "title": "Understanding the Interaction between Interests, Conversations and Friendships in Facebook", "abstract": "In this paper, we explore salient questions about user interests, conversations and friendships in the Facebook social network, using a novel latent space model that integrates several data types. A key challenge of studying Facebook's data is the wide range of data modalities such as text, network links, and categorical labels. Our latent space model seamlessly combines all three data modalities over millions of users, allowing us to study the interplay between user friendships, interests, and higher-order network-wide social trends on Facebook. The recovered insights not only answer our initial questions, but also reveal surprising facts about user interests in the context of Facebook's ecosystem. We also confirm that our results are significant with respect to evidential information from the study subjects." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1711.10421
0705.4485
Mixed Membership Stochastic Blockmodel
In this paper #REFR , the authors focus on directed networks and assume the observed network is generated according to node-specific distributions of community membership and edge-specific indicator vectors denoting the membership in one of the C communities.
[ "Many real-world networks are multi-faceted.", "However, stochastic blockmodels suffer from a limitation that each node can only belong to one group, or in other words, play a single latent role. To overcome this issue, Airoldi et al.", "#OTHEREFR relax the assumption of a single latent role for nodes and develop the mixed membership stochastic blockmodel." ]
[ "Each node is associated with a randomly drawn vector π i for node i, where π i,c denotes the probability of node i belonging to group c.", "That is, each node can simultaneously belong to multiple groups with different degrees of affiliation degree.", "The probabilities of edges between different groups are defined by the matrix of Bernoulli rates B C×C , where B(g, h) represents the probability of having an edge between a node from group g and imsart-generic ver.", "2014/10/16 file: dynnet-resubmit_arxiv.tex date: May 31, 2018 a node form group h.", "The mixed membership stochastic blockmodel posits that the {Y ij } 1<i,j<N are drawn from the following generative process." ]
[ "observed network" ]
background
{ "title": "A Review of Dynamic Network Models with Latent Variables", "abstract": "Abstract: We present a selective review of statistical modeling of dynamic networks. We focus on models with latent variables, specifically, the latent space models and the latent class models (or stochastic blockmodels), which investigate both the observed features and the unobserved structure of networks. We begin with an overview of the static models, and then we introduce the dynamic extensions. For each dynamic model, we also discuss its applications that have been studied in the literature, with the data source listed in Appendix. Based on the review, we summarize a list of open problems and challenges in dynamic network modeling with latent variables." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1612.00984
0705.4485
Class models.
In order to overcome this limitation, some authors #REFR extended classical class-based models to allow mixed membership.
[ "#OTHEREFR , studied and applied a non-parametric generalization of the model which allows for an infinite number of classes (therefore called infinite relational model ).", "It permits application on data where the information about class is not provided directly.", "They use a Gibbs sampling technique to infer model parameters.", "A well-known shortcoming of the class-based models is the proliferation of classes #OTHEREFR , since dividing a class according to a new feature leads to two different classes: if we have a class for \"students\" and then we wish to account for the gender too, we will have to split that class in \"female students\" and \"male students\".", "This approach is impractical and in many cases it leads to overlook significant dynamics." ]
[ "Here, the model of classes remains, but with a fuzzy approach: each node can be \"split\" among multiple classes, and in practice class assignments become represented by a probability distribution." ]
[ "classical class-based models" ]
method
{ "title": "Estimating latent feature-feature interactions in large feature-rich graphs", "abstract": "Complex networks arising in nature are usually modeled as (directed or undirected) graphs describing some connection between the objects that are identified with their nodes. In many real-world scenarios, though, those objects are endowed with properties and attributes (hereby called features). In this paper, we shall confine our interest to binary features, so that every node has a precise set of features; we assume that the presence/absence of a link between two given nodes depends on the features that the two nodes exhibit. Although the situation described above is truly ubiquitous, there is a limited body of research dealing with large graphs of this kind. Many previous works considered homophily as the only possible transmission mechanism translating node features into links: two nodes will be linked with a probability that depends on the number of features they share. Other authors, instead, developed more sophisticated models (often using Bayesian Networks [30] or Markov Chain Monte Carlo [20] ), that are indeed able to handle complex feature interactions, but are unfit to scale to very large networks. We study a model derived from the works of Miller et al. [47] , where interactions between pairs of features can foster or discourage link formation. In this work, we will investigate how to estimate the latent feature-feature interactions in this model. We shall propose two solutions: the first one assumes feature independence and it is essentially based on a Naive Bayes approach; the second one consists in using a learning algorithm, which relaxes the independence assumption and is based on perceptron-like techniques. In fact, we show it is possible to cast the model equation in order to see it as the prediction rule of a perceptron. 
We analyze how classical results for the perceptrons can be interpreted in this context; then, we define a fast and simple perceptron-like algorithm for this task. This approach (that we call Llama, Learning LAtent feature-feature MAtrix) can process hundreds of millions of links in minutes. Our experiments show that our approach can be applied even to very large networks. We then compare these two techniques in two different ways. First we produce synthetic datasets, obtained by generating random graphs following the model we adopted. These experiments show how well the Llama algorithm can reconstruct latent variables in this model. These experiments also provide evidence that the Naive independence assumptions made by the first approach are detrimental in practice. Then we consider a real, large-scale citation network where each node (i.e., paper) can be described by different types of characteristics. This second set of experiments confirms that our algorithm can find meaningful latent feature-feature interactions. Furthermore, our framework can be used to assess how well each set of features can explain the links in the graph." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1302.2684
0705.4485
Introduction 1
In this paper, we provide a novel approach for learning these models and establish regimes where the communities can be recovered efficiently. The mixed membership community model of #REFR has a number of attractive properties.
[ "A popular class of probabilistic models, termed as the stochastic blockmodels, have been widely studied and enjoy strong theoretical learning guarantees, e.g.", "#OTHEREFR ).", "However, they posit that an individual belongs to a single community, which does not hold in most real settings #OTHEREFR .", "In this paper, we consider a class of mixed membership community models, originally introduced by #OTHEREFR , and recently employed by #OTHEREFR and #OTHEREFR .", "This model has been shown to be effective in many real-world settings, but so far, no learning approach exists with provable guarantees." ]
[ "It retains many of the convenient properties of the stochastic block model.", "For instance, conditional independence of the edges is assumed, given the community memberships of the nodes in the network.", "At the same time, it allows for communities to overlap, and for every individual to be fractionally involved in different communities.", "It includes the stochastic block model as a special case (corresponding to zero overlap among the different communities).", "This enables us to compare our learning guarantees with existing works for stochastic block models, and also study how the extent of overlap among different communities affects the learning performance." ]
[ "communities", "model" ]
background
{ "title": "A Tensor Spectral Approach to Learning Mixed Membership Community Models", "abstract": "Detecting hidden communities from observed interactions is a classical problem. Theoretical analysis of community detection has so far been mostly limited to models with non-overlapping communities such as the stochastic block model. In this paper, we provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced in Airoldi et al. (2008) . This model allows for nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning communities in these models via a tensor spectral decomposition approach. Our estimator uses low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is based on simple linear algebraic operations such as singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters, and present a careful finite sample analysis of our learning method. Additionally, our results match the best known scaling requirements for the special case of the (homogeneous) stochastic block model." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1302.2684
0705.4485
Summary of Results
Learning general mixed membership models: We present a unified approach for the mixed membership model of #REFR .
[ "We now summarize the main contributions of this paper.", "We propose a novel approach for learning mixed membership community models of #OTHEREFR .", "Our approach is a method-of-moments estimator and incorporates tensor spectral decomposition techniques.", "We provide guarantees for our approach under a set of sufficient conditions.", "Finally, we compare our results to existing ones for the special case of the stochastic block model, where nodes belong to a single community." ]
[ "The extent of overlap between different communities in this model class is controlled (roughly) through a single scalar parameter, termed as the Dirichlet concentration parameter α 0 := i α i , when the community membership vectors are drawn from the Dirichlet distribution Dir(α).", "When α 0 → 0, the mixed membership model degenerates to a stochastic block model.", "We propose a unified learning method for the class of mixed membership models.", "We provide explicit scaling requirements in terms of the extent of community overlaps (through α 0 ), the network size n, the number of communities k, and the average edge connectivity across various communities.", "For instance, for the special case, where p is the probability of an intra-community edge, and q corresponds to the probability of inter-community connectivity, when the average community sizes are equal, we require that 2" ]
[ "mixed membership model" ]
method
{ "title": "A Tensor Spectral Approach to Learning Mixed Membership Community Models", "abstract": "Detecting hidden communities from observed interactions is a classical problem. Theoretical analysis of community detection has so far been mostly limited to models with non-overlapping communities such as the stochastic block model. In this paper, we provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced in Airoldi et al. (2008) . This model allows for nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning communities in these models via a tensor spectral decomposition approach. Our estimator uses low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is based on simple linear algebraic operations such as singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters, and present a careful finite sample analysis of our learning method. Additionally, our results match the best known scaling requirements for the special case of the (homogeneous) stochastic block model." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1803.01616
0705.4485
I. INTRODUCTION
In particular, we focus on mixedmembership stochastic block models #REFR , in which nodes are allowed to belong to multiple groups simultaneously.
[ "Here, we introduce new generative models that are suitable to address the challenge above.", "We model all layers concurrently, so that our approach takes full advantage of the information contained in all layers to make predictions for any one of them.", "Our approach relies on the fact that having information on the interactions in different layers aids the inference * marc.tarres@urv.cat; These authors contributed equally to the work † antonia.godoy@urv.cat; These authors contributed equally to the work ‡ roger.guimera@urv.cat; Corresponding author § marta.sales@urv.cat process; in other words, that the interactions in layers different from the one we are interested in are informative about the interactions in the query layer.", "For instance, biologically similar cancer types are likely to show similar responses to the same drug pairs, and similar days of the week (for instance weekdays versus weekends) are also likely to display similar communication patterns for pairs of users.", "Our approach is based on recent results on probabilistic inference on stochastic block models, which has been successful at modeling the structure of complex networks #OTHEREFR and at predicting the behavior in biological #OTHEREFR and social #OTHEREFR systems." ]
[ "With these models it possible to model large complex networks with millions of links and, because they are more expressive than their fixed-membership counterparts, their predictive power is often superior #OTHEREFR .", "We propose two different mixedmembership multi-layer network models-a tensorial model that takes nodes as the basic unit to describe interactions in different layers, and a bipartite model that takes links (or pairs of nodes) as the basic unit.", "In our models, layers, as well as nodes or links, are grouped based on the similarities among the interaction patterns observed in them.", "This is in contrast to existing approaches, which do not take full advantage of the information that each layer carries about the structure of some other layers.", "We illustrate our models and inference approaches by analyzing two datasets-a network of drug interactions in different cancer types, and a temporal network of email communications #OTHEREFR ." ]
[ "mixedmembership stochastic block" ]
background
{ "title": "Tensorial and bipartite block models for link prediction in layered networks and temporal networks", "abstract": "Many real-world complex systems are well represented as multilayer networks; predicting interactions in those systems is one of the most pressing problems in predictive network science. To address this challenge, we introduce two stochastic block models for multilayer and temporal networks; one of them uses nodes as its fundamental unit, whereas the other focuses on links. We also develop scalable algorithms for inferring the parameters of these models. Because our models describe all layers simultaneously, our approach takes full advantage of the information contained in the whole network when making predictions about any particular layer. We illustrate the potential of our approach by analyzing two empirical datasets-a temporal network of email communications, and a network of drug interactions for treating different cancer types. We find that modeling all layers simultaneously does result, in general, in more accurate link prediction. However, the most predictive model depends on the dataset under consideration; whereas the node-based model is more appropriate for predicting drug interactions, the link-based model is more appropriate for predicting email communication." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1803.01616
0705.4485
II. TENSORIAL AND BIPARTITE MIXED-MEMBERSHIP BLOCK MODELS FOR LAYERED NETWORKS
To model such mixed group memberships #REFR , to each node i we assign a vector θ i ∈ R K , where θ iα ∈ [0, 1] denotes the probability that node i belongs to group α.
[ "We consider two types of generative models-one that takes individual nodes as its basic unit, and one that models links (or node pairs).", "The first generative model, based on individual nodes, is as follows.", "There are K groups of nodes and L groups of layers.", "We assume that the probability that a node in group α has an interaction of type r with a node in group β in a layer in group γ is p αβγ (r).", "Furthermore, we assume that both nodes and layers can belong to more than one group." ]
[ "Similarly, to each layer ℓ we assign a vector η ℓγ ∈ R L .", "These vectors are normalized so that α θ iα = γ η ℓγ = 1.", "The probability that link (i, j, ℓ) is of type r is then", "Note that if link types are exclusive (i.e.", "each edge can be of only one type), the probability tensor must satisfy the constraint r∈R p αβγ (r) = 1." ]
[ "node", "mixed group memberships" ]
method
{ "title": "Tensorial and bipartite block models for link prediction in layered networks and temporal networks", "abstract": "Many real-world complex systems are well represented as multilayer networks; predicting interactions in those systems is one of the most pressing problems in predictive network science. To address this challenge, we introduce two stochastic block models for multilayer and temporal networks; one of them uses nodes as its fundamental unit, whereas the other focuses on links. We also develop scalable algorithms for inferring the parameters of these models. Because our models describe all layers simultaneously, our approach takes full advantage of the information contained in the whole network when making predictions about any particular layer. We illustrate the potential of our approach by analyzing two empirical datasets-a temporal network of email communications, and a network of drug interactions for treating different cancer types. We find that modeling all layers simultaneously does result, in general, in more accurate link prediction. However, the most predictive model depends on the dataset under consideration; whereas the node-based model is more appropriate for predicting drug interactions, the link-based model is more appropriate for predicting email communication." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1107.3765
0705.4485
Scalability
For this experiment, the number of topics was set to K = 10, and inference was done with 137 mappers (the number of input sequence files) and 100 reducers #REFR .
[ "We report results on the TREC document collection (disks 4 and 5 #OTHEREFR ), consisting mostly of newswire documents from the Financial Times and LA Times.", "It contains more than 100, 000 distinct word types in approximately half a million documents.", "As a preprocessing step, we remove types that appear fewer than 20 times and apply stemming #OTHEREFR , reducing the vocabulary size to approximately 65, 000.", "This speeds inference and is consistent with standard approaches for LDA (but with a larger vocabulary than is typical).", "Figure 4 shows the relationship between training time and corpus size the training time averaged over the first 20 Map/Reduce iterations." ]
[ "Doubling the corpus size results in a less than 20% increase increase in running time, suggesting that Mr.", "LDA is able to successfully distribute the workload to more machines and take advantage of parallelism.", "As the number of input documents increases, the training time increases gracefully.", "The number of topics is another important factor affecting the training time (and hence the scalability) of the model.", "Figure 5 shows the average time for one iteration against different numbers of topics." ]
[ "topics", "inference" ]
method
{ "title": "Using Variational Inference and MapReduce to Scale Topic Modeling", "abstract": "Latent Dirichlet Allocation (LDA) is a popular topic modeling technique for exploring document collections. Because of the increasing prevalence of large datasets, there is a need to improve the scalability of inference of LDA. In this paper, we propose a technique called MapReduce LDA (Mr. LDA) to accommodate very large corpus collections in the MapReduce framework. In contrast to other techniques to scale inference for LDA, which use Gibbs sampling, we use variational inference. Our solution efficiently distributes computation and is relatively simple to implement. More importantly, this variational implementation, unlike highly tuned and specialized implementations, is easily extensible. We demonstrate two extensions of the model possible with this scalable framework: informed priors to guide topic discovery and modeling topics from a multilingual corpus." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1708.07852
0705.4485
1.4.
DCMM is closely related to Mixed Membership Stochastic Block (MMSB) model by Airoldi et al. #REFR , but MMSB does not model degree heterogeneity.
[ "We propose Mixed-SCORE as a new approach to estimating network mixed memberships.", "The method contains four ingredients: (a) dimension reduction by PCA, (b) eigenvector normalization by SCORE, (c) a surprising connection between normalized eigenvectors and the Ideal Simplex, and (d) an easy-to-use Vertex Hunting algorithm that allows for a convenient reconstruction of the memberships.", "We analyze Mixed-SCORE carefully under the DCMM model.", "However, Mixed-SCORE is not tied to DCMM and may be successful in much broader settings.", "For example, for the data sets in Table 1 , where the DCMM only holds approximately (at most), yet the simplex structure is clearly visible, and Mixed-SCORE performs quite satisfactorily; see Section 2 for details." ]
[ "It is also different from the Latent Position Cluster (LPC) model by Handcock et al. #OTHEREFR .", "DCMM is similar to the Overlapping Continuous Community Assignment (OCCAM) model by Zhang et al.", "#OTHEREFR , but their models and interpretation on Π are very different.", "DCMM is closely related to Newman's DCBM #OTHEREFR , and is related to the recent literature on DCBM #OTHEREFR .", "However, these works are mostly focused on community detection, not on membership estimation." ]
[ "Mixed Membership Stochastic" ]
background
{ "title": "Estimating network memberships by simplex vertex hunting", "abstract": "Consider an undirected mixed membership network with $n$ nodes and $K$ communities. For each node $1 \\leq i \\leq n$, we model the membership by $\\pi_{i} = (\\pi_{i}(1), \\pi_{i}(2), \\ldots$, $\\pi_{i}(K))'$, where $\\pi_{i}(k)$ is the probability that node $i$ belongs to community $k$, $1 \\leq k \\leq K$. We call node $i$\"pure\"if $\\pi_i$ is degenerate and\"mixed\"otherwise. The primary interest is to estimate $\\pi_i$, $1 \\leq i \\leq n$. We model the adjacency matrix $A$ with a Degree Corrected Mixed Membership (DCMM) model. Let $\\hat{\\xi}_1, \\hat{\\xi}_2, \\ldots, \\hat{\\xi}_K$ be the first $K$ eigenvectors of $A$. We define a matrix $\\hat{R} \\in \\mathbb{R}^{n, K-1}$ by $\\hat{R}(i,k) = \\hat{\\xi}_{k+1}(i)/\\hat{\\xi}_1(i)$, $1 \\leq k \\leq K-1$, $1 \\leq i \\leq n$. The matrix can be viewed as a distorted version of its non-stochastic counterpart $R \\in \\mathbb{R}^{n, K-1}$, which is unknown but contains all information we need for the memberships. We reveal an interesting insight: There is a simplex ${\\cal S}$ in $\\mathbb{R}^{K-1}$ such that row $i$ of $R$ corresponds to a vertex of ${\\cal S}$ if node $i$ is pure, and corresponds to an interior point of ${\\cal S}$ otherwise. Vertex Hunting (i.e., estimating the vertices of ${\\cal S}$) is thus the key to our problem. The matrix $\\hat{R}$ is a row-wise normalization on the matrix of eigenvectors $\\hat{\\Xi}=[\\hat{\\xi}_1,\\ldots,\\hat{\\xi}_K]$, first proposed by Jin (2015). Alternatively, we may normalize $\\hat{\\Xi}$ by the row-wise $\\ell^q$-norms (e.g., Supplement of Jin (2015)), but it won't give rise to a simplex so is less convenient. We propose a new approach $\\textit{Mixed-SCORE}$ to estimating the memberships, at the heart of which is an easy-to-use Vertex Hunting algorithm. The approach is successfully applied to $4$ network data sets. 
We also derive the rate of convergence for Mixed-SCORE." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
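The ratio matrix at the heart of Mixed-SCORE above, R̂(i,k) = ξ̂_{k+1}(i)/ξ̂_1(i), is straightforward to compute from the leading eigenvectors of A. A minimal numpy sketch under simplifying assumptions (a symmetric toy adjacency matrix; the paper's vertex-hunting step is omitted):

```python
import numpy as np

def score_ratio_matrix(A, K):
    """Compute the SCORE ratio matrix R_hat (n x (K-1)) from the
    K leading eigenvectors of a symmetric adjacency matrix A."""
    # eigh returns eigenvalues in ascending order; take the K largest in magnitude
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(-np.abs(vals))[:K]
    xi = vecs[:, order]                      # columns: xi_1, ..., xi_K
    # entrywise ratios against the leading eigenvector
    return xi[:, 1:] / xi[:, [0]]

# toy example: a 2-block network with strong within-block connectivity
rng = np.random.default_rng(0)
n = 40
P = np.full((n, n), 0.05)
P[:20, :20] = P[20:, 20:] = 0.6
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric, no self-loops
R = score_ratio_matrix(A, K=2)
print(R.shape)                               # (40, 1)
```

For pure nodes of a two-community network, the rows of R cluster around two points (the simplex vertices); mixed nodes would fall between them.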
1906.10470
0705.4485
Community index s_n
We use the mixed membership stochastic block model (MMSB) #REFR to model the social connection A(n, m) between agents n and m.
[ "We model the community index s_n of agent n as", "where the mixture weights", "with α being a concentration hyperparameter and Dir(α) being the Dirichlet distribution." ]
[ "In this model, z_{n→m} is the community whose belief agent n subscribes to due to the social influence from agent m.", "Under the influence of different agents, agent n may subscribe to the beliefs of different communities.", "If both agents n and m subscribe to the belief of the same community, they are more likely to be connected in the social network. We assume the following:", "where Be(g_0, h_0) is the beta distribution with parameters g_0, h_0 > 0, k = 1, . . . , K, and with being a small constant.", "Note that A is independent of π when z is given, as shown in Fig. 1 ." ]
[ "mixed membership stochastic" ]
method
{ "title": "An Unsupervised Bayesian Neural Network for Truth Discovery", "abstract": "The problem of estimating event truths from conflicting agent opinions is investigated. An autoencoder learns the complex relationships between event truths, agent reliabilities and agent observations. A Bayesian network model is proposed to guide the learning of the autoencoder by modeling the dependence of agent reliabilities corresponding to different data samples. At the same time, it also models the social relationships between agents in the network. The proposed approach is unsupervised and is applicable when ground truth labels of events are unavailable. A variational inference method is used to jointly estimate the hidden variables in the Bayesian network and the parameters in the autoencoder. Simulations and experiments on real data suggest that the proposed method performs better than several other inference methods, including majority voting, the Bayesian Classifier Combination (BCC) method, the Community BCC method, and the recently proposed VISIT method." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
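The generative story in the record above (Dirichlet mixture weights, pairwise community indicators z_{n→m}, beta-distributed within-community link probabilities Be(g_0, h_0), and a small constant for cross-community links) can be sketched as a toy sampler. The hyperparameter values, and the name `eps` for the unnamed small constant, are illustrative assumptions rather than the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(4)
N, K = 10, 3
alpha, g0, h0, eps = 0.5, 2.0, 2.0, 0.01   # toy hyperparameters (assumed)

pi = rng.dirichlet(alpha * np.ones(K), size=N)   # mixture weights per agent
beta_k = rng.beta(g0, h0, size=K)                # within-community link probabilities

A = np.zeros((N, N), dtype=int)
for n in range(N):
    for m in range(n + 1, N):
        z_nm = rng.choice(K, p=pi[n])    # community agent n subscribes to, under m's influence
        z_mn = rng.choice(K, p=pi[m])    # community agent m subscribes to, under n's influence
        # same community -> beta-distributed link probability; otherwise a small constant
        p_link = beta_k[z_nm] if z_nm == z_mn else eps
        A[n, m] = A[m, n] = rng.random() < p_link
print(A.sum())
```

The sampler makes the conditional independence noted in the record explicit: given the indicators z, the adjacency A no longer depends on π.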
1910.09370
0705.4485
Higher-Order Stochastic Block Model
This formulation represents an extension of the mixed-membership stochastic block model in #REFR , which was designed for networks with only pairwise interactions.
[ "For a given hypergraph, whether comprised of content or context nodes, the propensity of any combination of nodes to form a hyperedge is modeled as a product of two factors: the complementarity between the nodes in the combination and their cognitive availabilities.", "Combinations with higher propensity will be more likely to turn into papers and patents, agreeing with the intuition that people tend to search locally and pursue trending topics.", "To formulate this idea formally, each node i is associated with a latent vector θ_i that positions the node in a latent space constructed to optimize the likelihood of observed papers and patents.", "Each entry θ_id of the latent vector denotes the probability that node i belongs to a latent dimension d, and thus ∑_d θ_id = 1.", "The complementarity between nodes in a combination h is modeled as the probability that those nodes belong to the same dimension, ∑_d ∏_{i∈h} θ_id." ]
[ "We also account for each node's cognitive availability because most empirical networks display great heterogeneity in node connectivity, with few contents intensively drawn upon and few contexts widely attended or appreciated across many papers and patents.", "Previous work ( 31 ) has shown that by integrating heterogeneity of node connectivity, the performance of community detection in real-world networks dramatically improves.", "Accordingly, we associate each node i with a latent scalar r_i to account for its cognitive availability, presumably associated with its overall connectivity in the network.", "Assembling these components, the propensity λ_h of combination h, our expectation of its appearance in actual papers and patents, is modeled as the product of the complementarity between the nodes in h and their availabilities.", "To link the propensities to their observed appearances, we model the number X_h of papers or patents that embody a certain combination as a Poisson random variable with the propensity of that combination as its mean: X_h ~ Poisson(λ_h). Accordingly, the probability of observing a hypergraph G is the product of probabilities of observing all possible combinations:" ]
[ "networks", "id mixed-membership stochastic" ]
method
{ "title": "Science and Technology Advance through Surprise", "abstract": "Breakthrough discoveries and inventions involve unexpected combinations of contents including problems, methods, and natural entities, and also diverse contexts such as journals, subfields, and conferences. Drawing on data from tens of millions of research papers, patents, and researchers, we construct models that predict more than 95% of next year's content and context combinations with embeddings constructed from high-dimensional stochastic block models, where the improbability of new combinations itself predicts up to half of the likelihood that they will gain outsized citations and major awards. Most of these breakthroughs occur when problems in one field are unexpectedly solved by researchers from a distant other. These findings demonstrate the critical role of surprise in advance, and enable evaluation of scientific institutions ranging from education and peer review to awards in supporting it. 19 th Century philosopher and scientist Charles Sanders Peirce argued that neither the logics of deduction nor induction alone could characterize the reasoning behind path-breaking new hypotheses in science, but rather their collision through a process he termed abduction. Abduction begins as expectations born of theory or tradition become disrupted by unexpected observations or findings ( 1 ) . Surprise stimulates scientists to forge new claims that make the surprising unsurprising. Here we empirically demonstrate across the biomedical sciences, physical sciences and patented inventions that, following Peirce, surprising hypotheses, findings and insights are the best available predictor of outsized success. But neither Peirce nor anyone since has specified where the stuff of new hypotheses came from. 
One account is serendipity or making the most of surprising encounters ( 2 , 3 ) , encapsulated in Pasteur's oft-quoted maxim \"chance favors only the prepared mind\" ( 4 ) , but this poses a paradox. The successful scientific mind must simultaneously know enough within a scientific or technological context to be surprised, and enough outside to imagine why it should not be surprised. Here we show how surprising successes systematically emerge across, rather than within researchers; most commonly when those in one field surprisingly publish problem-solving results to audiences in a distant other. This contrasts with research that focuses on inter-and multi-disciplinarity as sources of advance ( 5 -7 ) . We show how predictability and surprise in science and technology allow us new tools to evaluate how scientific institutions ranging from graduate education, peer review and awards facilitate advance. observing every possible combination (see SM for details)." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
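The propensity model described in the record above, λ_h = (∑_d ∏_{i∈h} θ_id) · ∏_{i∈h} r_i with X_h ~ Poisson(λ_h), can be sketched with toy parameters (all values below are illustrative assumptions, not the fitted model):

```python
import numpy as np

def propensity(theta, r, h):
    """lambda_h = (sum_d prod_{i in h} theta[i, d]) * prod_{i in h} r[i]."""
    complementarity = np.prod(theta[list(h)], axis=0).sum()  # same-dimension probability
    availability = np.prod(r[list(h)])                       # cognitive availabilities
    return complementarity * availability

rng = np.random.default_rng(1)
n_nodes, n_dims = 6, 3
theta = rng.dirichlet(np.ones(n_dims), size=n_nodes)  # rows sum to 1 (latent vectors)
r = rng.uniform(0.5, 2.0, size=n_nodes)               # toy availability scalars

h = (0, 2, 4)                                         # a candidate hyperedge
lam = propensity(theta, r, h)
x_h = rng.poisson(lam)                                # observed count X_h ~ Poisson(lambda_h)
print(lam, x_h)
```

Since each row of theta sums to one, the complementarity term lies in (0, 1], so combinations whose nodes concentrate on the same latent dimension get the highest propensities.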
1104.3590
0705.4485
I. INTRODUCTION
The models used in most previous work are "mixed membership" models #REFR , in which, typically, vertices can belong to multiple groups and two vertices are more likely to be connected if they have more than one group in common.
[ "Our approach is to develop a solution to this general problem first, then show how a variant of the same approach can be applied to nonoverlapping communities as well.", "We tackle the detection of overlapping communities by fitting a stochastic generative model of network structure to observed network data.", "This approach, which applies methods of statistical inference to networks, has been explored by a number of authors for the nonoverlapping case, including some work that goes back several decades #OTHEREFR .", "Extending the same approach to the overlapping case, however, has proved nontrivial.", "The crucial step is to devise a generative model that produces networks with overlapping community structure similar to that seen in real networks." ]
[ "This, however, implies that the area of overlap between two communities should have a higher average density of edges than an area that falls in just a single community.", "It is unclear whether this reflects the behavior of real-world networks accurately, but it is certainly possible to construct networks that do not have this type of structure.", "Ideally we would prefer a less restrictive model that makes fewer assumptions about the structure of community overlaps.", "Another set of approaches to the detection of overlapping communities are those based on local community structure.", "Rather than splitting an entire network into communities in one step, these methods instead look for local groups within the network, based on analysis of local connection patterns and ignoring global network structure." ]
[ "\"mixed membership\" models" ]
method
{ "title": "An efficient and principled method for detecting communities in networks", "abstract": "A fundamental problem in the analysis of network data is the detection of network communities, groups of densely interconnected nodes, which may be overlapping or disjoint. Here we describe a method for finding overlapping communities based on a principled statistical approach using generative network models. We show how the method can be implemented using a fast, closed-form expectation-maximization algorithm that allows us to analyze networks of millions of nodes in reasonable running times. We test the method both on real-world networks and on synthetic benchmarks and find that it gives results competitive with previous methods. We also show that the same approach can be used to extract nonoverlapping community divisions via a relaxation method, and demonstrate that the algorithm is competitively fast and accurate for the nonoverlapping problem." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1710.07231
0705.4485
II. RELATED WORK
While conventional SBMs are defined for nonoverlapping community assignments, many overlapping or mixed variants have also been introduced #REFR .
[ "However, in such models, the graph is grown one node at a time, which makes them inherently serial and unscalable.", "Lately, there has been an emphasis on models which can be learnt from observed graphs.", "These include the Exponential Random Graph Models (ERGM) (also referred to as p* models) #OTHEREFR .", "ERGM essentially defines a log-linear model over all possible graphs G, p(G|θ) ∝ exp(θ^T s(G)), where G is a graph and s is a set of functions that can be viewed as summary statistics for the structural features of the network.", "Other popular and well-known network models are Stochastic Block Models (SBM) #OTHEREFR , in which each node belongs to a cluster and the relationships between nodes are determined by their cluster membership." ]
[ "Another relevant model is the Chung-Lu (CL) model #OTHEREFR in which the probability of an edge is proportional to the product of the degrees of its end vertices.", "While the CL model effectively captures the degree distribution, it performs poorly for other properties such as the clustering coefficient.", "A recent extension of the CL model, called the Block Two-Level Erdős-Rényi (BTER) model #OTHEREFR , is shown to match both the degree distribution and clustering coefficient on several graphs.", "However, the BTER model is not truly generative, as it only allows for creation of a synthetic graph which is exactly the same size as the observed graph.", "For generating arbitrarily sized graphs, one needs to provide parameters instead of learning them." ]
[ "nonoverlapping community assignments", "mixed variants" ]
background
{ "title": "Modeling Graphs Using a Mixture of Kronecker Models", "abstract": "Generative models for graphs are increasingly becoming a popular tool for researchers to generate realistic approximations of graphs. While in the past, focus was on generating graphs which follow general laws, such as the power law for degree distribution, current models have the ability to learn from observed graphs and generate synthetic approximations. The primary emphasis of existing models has been to closely match different properties of a single observed graph. Such models, though stochastic, tend to generate samples which do not have significant variance in terms of the various graph properties. We argue that in many cases real graphs are drawn from a graph population (e.g., networks sampled at various time points, social networks for individual schools, healthcare networks for different geographic regions, etc.). Such populations typically exhibit significant variance. However, existing models are not designed to model this variance, which could lead to issues such as overfitting. We propose a graph generative model that focuses on matching the properties of real graphs and the natural variance expected for the corresponding population. The proposed model adopts a mixture-model strategy to expand the expressiveness of Kronecker product based graph models (KPGM), while building upon the two strengths of KPGM, viz., ability to model several key properties of graphs and to scale to massive graph sizes using its elegant fractal growth based formulation. The proposed model, called x-Kronecker Product Graph Model, or xKPGM, allows scalable learning from observed graphs and generates samples that match the mean and variance of several salient graph properties. We experimentally demonstrate the capability of the proposed model to capture the inherent variability in real world graphs on a variety of publicly available graph data sets." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
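The Chung-Lu model mentioned in the record above makes the edge probability proportional to the product of the endpoint degrees. A minimal sketch, assuming the common form p_ij = min(1, d_i d_j / ∑_k d_k):

```python
import numpy as np

def chung_lu_edge_probs(degrees):
    """Expected-degree (Chung-Lu) edge probabilities p_ij = min(1, d_i * d_j / sum(d))."""
    d = np.asarray(degrees, dtype=float)
    P = np.minimum(1.0, np.outer(d, d) / d.sum())
    np.fill_diagonal(P, 0.0)   # no self-loops
    return P

d = [1, 2, 2, 5]
P = chung_lu_edge_probs(d)
# row sums approximate the target degrees when no capping occurs
print(P.sum(axis=1))           # approx [0.9, 1.6, 1.6, 2.5]
```

The mismatch between row sums and the target degrees (visible for the high-degree node) comes from the min(1, ·) cap and the excluded self-loop term, which is exactly why extensions such as BTER were proposed to additionally match clustering.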
1610.06145
0705.4485
Empirical Timing Results.
For the true θ*, θ*_{kn} for k = 0 are n evenly spaced samples over the interval [0, 1]; θ*_{kn} for k = 1 are n evenly spaced samples over the interval of #REFR 0] .
[ "We constructed 12 synthetic data sets in a full-factorial arrangement with M ∈ {20, 40, 60, 80}, K ∈ {2}, and N ∈ {4, 5, 6} and measured CPU time for each component of one iteration.", "For each arrangement, each element of the true x* is:", "Here N(0, 0.5²) denotes a sample from a normal distribution with mean 0 and standard deviation 0.5." ]
[ "Table 1 shows that the time per iteration increases linearly with M when K and N are fixed.", "The time for solving all the relaxed dual problems increases as the number of samples increases.", "Even though the step of solving all the relaxed dual problems takes more than 90% of the total time per iteration when the number of samples is 6, our algorithm is easily parallelized to solve the relaxed dual problems, allowing the algorithm to scale nearly linearly with the size of the data set." ]
[ "spaced samples" ]
background
{ "title": "A global optimization algorithm for sparse mixed membership matrix factorization", "abstract": "Abstract. Mixed membership factorization is a popular approach for analyzing data sets that have within-sample heterogeneity. In recent years, several algorithms have been developed for mixed membership matrix factorization, but they only guarantee estimates from a local optimum. Here, we derive a global optimization (GOP) algorithm that provides a guaranteed -global optimum for a sparse mixed membership matrix factorization problem. We test the algorithm on simulated data and find the algorithm always bounds the global optimum across random initializations and explores multiple modes efficiently." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
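The synthetic setup in the record above (rows of θ* built from n evenly spaced samples, elements of x* drawn from N(0, 0.5²)) can be reproduced with numpy. The interval for the k = 1 row is not fully specified in the extracted text, so [-1, 0] below is purely an illustrative assumption:

```python
import numpy as np

n = 6
theta_star = np.vstack([
    np.linspace(0.0, 1.0, n),    # k = 0: n evenly spaced samples over [0, 1]
    np.linspace(-1.0, 0.0, n),   # k = 1: interval assumed to be [-1, 0] for illustration
])

# elements of the true x* drawn as N(0, 0.5^2) samples, per the setup above
rng = np.random.default_rng(2)
x_star = rng.normal(0.0, 0.5, size=(4, n))
print(theta_star.shape, x_star.shape)
```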
1405.3267
0705.4485
Then we have the following results for T
Notice that n − 2 ≤ m ≤ n²/4 #REFR , so we split the proof into two parts based on the regime of m.
[ "where m = 2k(n/2 − k).", "Proof: In the following, for clarity of notation, we have omitted the floor/ceiling symbols for numbers that are not integers but should be. Recall that", "where", ".", "The idea behind the proof is to bound log(T(m, p, q, 0)) by the dominant term log(T*(m, p, q, 0)) when n is large and then use Lemma 7." ]
[ "The first case corresponds to m such that m ≥ n log log n. What is important is that n = o(m). We have", "Notice that each term in the double-sum can be upper-bounded by T*(m, p, q, 0) as defined in #OTHEREFR . Hence", "and using #OTHEREFR", "As m/n ≥ log log n and m ≤ n²/4, notice that log(m) = o((m/n) log(n)) and", "The second case corresponds to m < n log log n." ]
[ "2 parts", "m." ]
background
{ "title": "Exact Recovery in the Stochastic Block Model", "abstract": "The stochastic block model with two communities, or equivalently the planted bisection model, is a popular model of random graph exhibiting a cluster behavior. In the symmetric case, the graph has two equally sized clusters and vertices connect with probability p within clusters and q across clusters. In the past two decades, a large body of literature in statistics and computer science has focused on providing lower bounds on the scaling of |p − q| to ensure exact recovery. In this paper, we identify a sharp threshold phenomenon for exact recovery: if α = pn/log(n) and β = qn/log(n) are constant (with α > β), recovering the communities with high probability is possible if (α + β)/2 − √(αβ) > 1 and is impossible if (α + β)/2 − √(αβ) < 1. In particular, this improves the existing bounds. This also sets a new line of sight for efficient clustering algorithms. While maximum likelihood (ML) achieves the optimal threshold (by definition), it is in the worst case NP-hard. This paper proposes an efficient algorithm based on a semidefinite programming relaxation of ML, which is proved to succeed in recovering the communities close to the threshold, while numerical experiments suggest that it may achieve the threshold. An efficient algorithm that succeeds all the way down to the threshold is also obtained using a partial recovery algorithm combined with a local improvement procedure." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
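The sharp exact-recovery threshold quoted in the abstract above, in its standard form (α + β)/2 − √(αβ) > 1 with α = pn/log(n) and β = qn/log(n), is easy to evaluate numerically (a condition check, not the paper's recovery algorithm):

```python
import math

def exact_recovery_possible(p, q, n):
    """Check the SBM exact-recovery condition (alpha + beta)/2 - sqrt(alpha*beta) > 1,
    where alpha = p*n/log(n) and beta = q*n/log(n)."""
    alpha = p * n / math.log(n)
    beta = q * n / math.log(n)
    return (alpha + beta) / 2 - math.sqrt(alpha * beta) > 1

n = 10000
# alpha = 10, beta = 1: 11/2 - sqrt(10) ~ 2.34 > 1
print(exact_recovery_possible(10 * math.log(n) / n, math.log(n) / n, n))  # True
# alpha = 2, beta = 1: 3/2 - sqrt(2) ~ 0.09 < 1
print(exact_recovery_possible(2 * math.log(n) / n, math.log(n) / n, n))   # False
```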
1707.01350
0705.4485
The model
We note that, compared to the original definition of MMSB #REFR , we do not assume a Dirichlet distribution of community memberships θ_i .
[ "The row vector θ_i ∈ [0, 1]^K is a community membership vector for node i.", "We introduce the community membership matrix Θ ∈ [0, 1]^{n×K} and further assume that each row θ_i of Θ is normalized: ∑_{k=1}^K θ_ik = 1.", "So, we can interpret θ_i as a vector of probabilities for the node i to belong to one of the communities. Finally, in the matrix form we can write", "Let us further denote", "The considered model is directly related to several models in the literature." ]
[ "The other related models are OCCAM [23] , where a different normalization of community membership vectors is considered, and SBMO [10] , where only binary community memberships are allowed.", "Compared to the variant of MMSB considered in [15] we consider a more general situation, where the matrix B is allowed to be any full-rank symmetric matrix.", "Finally, the ordinary stochastic block model is a particular instance of our model, where each vector of community memberships θ_i has exactly one non-zero entry (equal to one)." ]
[ "community memberships", "Dirichlet distribution" ]
background
{ "title": "Consistent Estimation of Mixed Memberships with Successive Projections", "abstract": "Abstract This paper considers the parameter estimation problem in Mixed Membership Stochastic Block Model (MMSB), which is a quite general instance of random graph model allowing for overlapping community structure. We present the new algorithm successive projection overlapping clustering (SPOC) which combines the ideas of spectral clustering and geometric approach for separable nonnegative matrix factorization. The proposed algorithm is provably consistent under MMSB with general conditions on the parameters of the model. SPOC is also shown to perform well experimentally in comparison to other algorithms." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
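With row-normalized memberships Θ and a full-rank symmetric block matrix B as in the record above, the expected adjacency (up to diagonal terms) is Θ B Θᵀ. A small numpy sketch with toy values (not the paper's estimator):

```python
import numpy as np

K, n = 2, 5
Theta = np.array([[1.0, 0.0],     # pure node in community 1
                  [0.0, 1.0],     # pure node in community 2
                  [0.5, 0.5],     # fully mixed node
                  [0.8, 0.2],
                  [0.3, 0.7]])    # each row sums to 1
B = np.array([[0.9, 0.1],
              [0.1, 0.8]])        # symmetric, full-rank block matrix

P = Theta @ B @ Theta.T           # expected adjacency E[A], up to the diagonal
print(P.shape)
```

For pure nodes the entries of P reduce to the block probabilities themselves (e.g., two pure community-1 nodes connect with probability B[0, 0]), recovering the ordinary stochastic block model as the special case where every row of Θ has a single non-zero entry.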
1404.0221
0705.4485
Mixed-Membership Stochastic Blockmodel
A beta distribution can also be specified for Θ with the choice of a conjugate prior; otherwise it may be treated as a nuisance parameter #REFR .
[ "The MMSBM #OTHEREFR extends the SBM to allow actors membership to multiple groups depending on the actor with which they interact.", "Within this framework, each actor a_i is assigned an individual mixing parameter τ_i, denoting their propensity for group membership.", "Indicator vectors Z^1_ij and Z^2_ij (note the superscript indices) denote the group membership of actors a_i (sender) and a_j (receiver) during an interaction Y_ij.", "Conditional on this additional model complexity, actor interaction is again modelled by a matrix Θ in a similar manner to the SBM.", "Choosing a Dirichlet prior distribution with hyperparameter δ ensures that each mixing parameter τ_i also follows the same distribution." ]
[ "#OTHEREFR (2008) also introduce an additional sparsity parameter in order to distinguish between the case where interactions in the network are in general quite rare, and when non-interaction is due to particularly low-level connection between groups. We exclude this parameter from our analysis.", "• for i ∈ 1, . . . , N :", "• for g and h ∈ 1, . . . , G:", "• for i and j ∈ 1, . . . , N :" ]
[ "choice", "beta distribution" ]
background
{ "title": "Mixed-Membership of Experts Stochastic Blockmodel", "abstract": "Social network analysis is the study of how links between a set of actors are formed. Typically, it is believed that links are formed in a structured manner, which may be due to, for example, political or material incentives, and which often may not be directly observable. The stochastic blockmodel represents this structure using latent groups which exhibit different connective properties, so that conditional on the group membership of two actors, the probability of a link being formed between them is represented by a connectivity matrix. The mixed membership stochastic blockmodel (MMSBM) extends this model to allow actors membership to different groups, depending on the interaction in question, providing further flexibility. Attribute information can also play an important role in explaining network formation. Network models which do not explicitly incorporate covariate information require the analyst to compare fitted network models to additional attributes in a post-hoc manner. We introduce the mixed membership of experts stochastic blockmodel, an extension to the MMSBM which incorporates covariate actor information into the existing model. The method is illustrated with application to the Lazega Lawyers dataset. Model and variable selection methods are also discussed." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
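The MMSBM generative steps described in the record above (τ_i ~ Dirichlet(δ); per-interaction sender/receiver indicators; interaction governed by Θ) can be sketched as a toy Bernoulli sampler. All parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
N, G = 8, 2
delta = np.ones(G)                           # Dirichlet hyperparameter (toy choice)
Theta = np.array([[0.90, 0.05],
                  [0.05, 0.80]])             # group-pair interaction probabilities

tau = rng.dirichlet(delta, size=N)           # mixing parameter tau_i per actor
Y = np.zeros((N, N), dtype=int)
for i in range(N):
    for j in range(N):
        if i == j:
            continue
        z1 = rng.choice(G, p=tau[i])         # sender i's group for this interaction
        z2 = rng.choice(G, p=tau[j])         # receiver j's group for this interaction
        Y[i, j] = rng.random() < Theta[z1, z2]
print(Y.sum())
```

Because the indicators are redrawn for every directed pair, an actor can effectively act as a member of different groups in different interactions, which is exactly the flexibility the mixed-membership extension adds over the plain SBM.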
1904.05335
0705.4485
In addition, a mixed membership SBM (MMSBM) has also been developed #REFR to deal with the situation where nodes have multi-label properties, namely belonging to overlapping clusters.
[ "Such ubiquitous features include the small-world phenomenon, power-law degree distributions, and overlapping cluster or community structures #OTHEREFR .", "Dozens of recent works have focused on modifying classical statistical models to improve the model capability and consequently improve the performance of statistical inference.", "SBM has been a significant statistical tool for latent cluster discovery in network data #OTHEREFR , #OTHEREFR , #OTHEREFR , and a variety of its extensions have also been developed.", "On the assumption that the nodes of a network are partitioned into different clusters and the existence of edges between pairwise nodes depends only on the clusters they belong to, Snijders and Nowicki #OTHEREFR first proposed using posterior inference to uncover such cluster structures.", "Incorporating nonparametric Bayesian techniques in SBM, with a Chinese restaurant process prior imposed over the node partition, #OTHEREFR addresses the issue of cluster number selection." ]
[ "Other main extensions of SBM include hierarchical SBM #OTHEREFR , integrating node attributes into SBM #OTHEREFR , a dynamic infinite extension of MMSBM #OTHEREFR , and improving model scalability by stochastic variational methods #OTHEREFR , #OTHEREFR .", "Due to its computational flexibility and structural interpretation, SBM and its extensions have become popular in a variety of network analysis tasks, e.g., uncovering social groups from relationship data #OTHEREFR , functional annotation of protein-protein interaction networks #OTHEREFR , and network clustering #OTHEREFR .", "It has long been noticed that real networks exhibit a ubiquitous scale-free property, i.e., the distribution of node degrees follows a power law #OTHEREFR .", "For example, some nodes in the World Wide Web have far more connections than others and are recognized as \"hubs\".", "However, the traditional SBM is incapable of handling this naturally existing scale-free property in networks." ]
[ "overlapped clusters" ]
method
{ "title": "Adapting Stochastic Block Models to Power-Law Degree Distributions", "abstract": "Stochastic block models (SBMs) have been playing an important role in modeling clusters or community structures of network data, but they are incapable of handling several complex features ubiquitously exhibited in real-world networks, one of which is the power-law degree characteristic. To this end, we propose a new variant of SBM, termed power-law degree SBM (PLD-SBM), by introducing degree decay variables to explicitly encode the varying degree distribution over all nodes. With an exponential prior, it is proved that PLD-SBM approximately preserves the scale-free feature in real networks. In addition, from the inference of the variational E-step, PLD-SBM indeed corrects the bias inherent in SBM with the introduced degree decay factors. Furthermore, experiments conducted on both synthetic networks and two real-world datasets including the Adolescent Health Data and the political blogs network verify the effectiveness of the proposed model in terms of cluster prediction accuracy." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
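The MMSB generative process referenced throughout these records can be sketched in a few lines. This is a minimal NumPy illustration of the standard formulation (per-node Dirichlet memberships, per-pair community draws, Bernoulli edges), not code from any of the cited papers; the function name and parameters are illustrative.

```python
import numpy as np

def sample_mmsb(n, alpha, B, rng=None):
    """Sample an undirected graph from a simple MMSB variant.

    n     : number of nodes
    alpha : Dirichlet concentration vector, length k
    B     : (k, k) community-to-community edge-probability matrix
    """
    rng = np.random.default_rng(rng)
    k = len(alpha)
    # Each node draws a mixed-membership vector on the k-simplex.
    pi = rng.dirichlet(alpha, size=n)            # shape (n, k)
    A = np.zeros((n, n), dtype=int)
    for u in range(n):
        for v in range(u + 1, n):
            # Each endpoint picks a community for this interaction.
            zu = rng.choice(k, p=pi[u])
            zv = rng.choice(k, p=pi[v])
            A[u, v] = A[v, u] = rng.random() < B[zu, zv]
    return pi, A
```

With an assortative B (large diagonal), sampled graphs show the block structure the records discuss, while nodes with mixed pi vectors bridge communities.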
1801.00132
0705.4485
Network Model and Basic Assumptions
If it is not known, then we can estimate it via a stochastic block model based method specified in #REFR .
[ "In real-world social networks, the nodes and edges of the network G correspond to users and their relationships, respectively, with little additional information being available.", "In the following, we thus consider both G and G to be undirected unweighted networks.", "Furthermore, we assume that the number of missing nodes M = |V M | is either known or can be approximated by standard methods for estimating the size of hidden or missing populations #OTHEREFR .", "To detect overlapping communities, we assume that social networks follow the affiliation graph model (AGM) #OTHEREFR , which states that the more communities a pair of nodes shares, the higher the probability that these two nodes are connected.", "The number of communities in the network is denoted by C and can also be assumed to be known." ]
[ "The AGM can be represented by a non-negative weight affiliation matrix F ∈ R^{(N+M)×C} such that each element F_uc represents the degree of membership of a node u ∈ (V ∪ V_M) to the community c.", "The probability P(u, v) of a connection between two nodes u and v then depends on the value of F and is given by", "vectors that correspond to nodes u and v, respectively #OTHEREFR , #OTHEREFR .", "The degree distribution can be estimated via least squares approximation just by taking at most 1% of the samples using a sublinear approach as indicated in #OTHEREFR .", "(a) Before node deletion (b) After node deletion Fig." ]
[ "stochastic block model" ]
method
{ "title": "Community Detection in Partially Observable Social Networks", "abstract": "The discovery of community structures in social networks has gained significant attention since it is a fundamental problem in understanding the networks' topology and functions. However, most social network data are collected from partially observable networks with both missing nodes and edges. In this paper, we address a new problem of detecting overlapping community structures in the context of such an incomplete network, where communities in the network are allowed to overlap since nodes belong to multiple communities at once. To solve this problem, we introduce KroMFac, a new framework that conducts community detection via regularized nonnegative matrix factorization (NMF) based on the Kronecker graph model. Specifically, from a generative parameter matrix acquired by the expectation-maximization (EM) algorithm, we first estimate the missing part of the network. As our major contribution to the proposed framework, to improve community detection accuracy, we then characterize and select influential nodes (which tend to have high degrees) by ranking, and add them to the existing graph. Finally, we uncover the community structures by solving the regularized NMF-aided optimization problem in terms of maximizing the likelihood of the underlying graph. Furthermore, adopting normalized mutual information (NMI), we empirically show superiority of our KroMFac approach over two baseline schemes." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
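The record above leaves the AGM edge-probability formula elided. A common instantiation (used, e.g., in BigCLAM-style affiliation models) sets P(u, v) = 1 − exp(−F_u · F_v), so sharing more or stronger community affiliations raises the connection probability. The sketch below assumes that particular link function; it is an illustration, not the cited paper's exact formula.

```python
import numpy as np

def agm_edge_prob(F):
    """Edge probabilities under an affiliation-graph-style model.

    F : (n, C) non-negative affiliation matrix; F[u, c] is node u's
        degree of membership in community c.
    """
    S = F @ F.T                  # pairwise affiliation overlaps F_u . F_v
    P = 1.0 - np.exp(-S)         # BigCLAM-style link function (assumed)
    np.fill_diagonal(P, 0.0)     # no self-loops
    return P
```

Nodes with disjoint affiliations get probability exactly 0, and the probability increases monotonically with the overlap, matching the AGM intuition stated in the record.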
1309.0787
0705.4485
Mixed Membership Model
In the mixed membership stochastic block model (MMSB), introduced by #REFR , the edges in a social network are related to the hidden communities of the nodes.
[ "We now consider a somewhat more complicated model which can also be learnt using tensor decomposition techniques." ]
[ "A tensor decomposition technique for learning MMSB was derived in #OTHEREFR .", "Let n denote the number of nodes, k the number of communities, and G ∈ R^{n×n} the adjacency matrix of the graph.", "Each node i ∈ [n] has an associated community membership vector π_i ∈ R^k, which is a latent variable, and the vectors are contained in a simplex, i.e.,", "where the notation [n] denotes the set {1, . . . , n}.", "Membership vectors are sampled i.i.d. from the Dirichlet distribution, π_u ∼ Dir(α) for all u ∈ [n], with parameter vector α ∈ R^k_+, where α_0 := Σ_{i∈[k]} α_i." ]
[ "hidden communities", "mixed membership stochastic" ]
background
{ "title": "Fast Detection of Overlapping Communities via Online Tensor Methods", "abstract": "We present a fast tensor-based approach for detecting hidden overlapping communities under the Mixed Membership Stochastic Blockmodel (MMSB). We present two implementations, viz., a GPU-based implementation which exploits the parallelism of SIMD architectures and a CPU-based implementation for larger datasets, wherein the GPU memory does not suffice. Our GPU-based implementation involves a careful optimization of storage, data transfer and matrix computations. Our CPU-based implementation involves sparse linear algebraic operations which exploit the data sparsity. We use stochastic gradient descent for multilinear spectral optimization and this allows for flexibility in the tradeoff between node sub-sampling and accuracy of the results. We validate our results on datasets from Facebook, Yelp and DBLP where ground truth is available, using notions of p-values and false discovery rates, and obtain high accuracy for membership recovery. We compare our results, both in terms of execution time and accuracy, to the state-of-the-art algorithms such as the variational method, and report many orders of magnitude gain in the execution time. The tensor method is also applicable for unsupervised learning of a wide range of latent variable models, and we also demonstrate efficient recovery of topics from the Nytimes dataset." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1412.4857
0705.4485
Introduction
In the mixed membership model #REFR , each individual may belong to more than one community.
[ "In a stochastic block model, the observed network is often recorded in the form of an n × n adjacency matrix A, representing the presence/absence of pairwise interactions among n individuals in a population of interest.", "The model assumes that (i) the individuals are partitioned into K disjoint communities, and (ii) given the memberships, the upper diagonal entries of A are independent Bernoulli random variables, where the parameter E(A ij ) depends only on the memberships of nodes i and j.", "Such a model naturally captures the community structures commonly observed in complex networks, and has close connection to nonparametric exchangeable random graphs #OTHEREFR .", "The stochastic block model can be made more realistic by incorporating additional parameters to better approximate real world network data.", "For example, #OTHEREFR incorporated individual node activeness into the stochastic block model to allow for arbitrary degree distributions." ]
[ "In this paper we develop a goodness-of-fit test for stochastic block models.", "Given an adjacency matrix A and a positive integer K 0 , we test whether A can be adequately fitted by a stochastic block model with K 0 communities.", "Our test statistic is the largest singular value of a residual matrix obtained by removing the estimated block mean effect from the observed adjacency matrix.", "Intuitively, if A is generated by a stochastic block model and the block mean effect is estimated appropriately, the residual matrix will approximate a generalized Wigner matrix: a symmetric random matrix with independent mean zero upper diagonal entries.", "Our first contribution is the asymptotic null distribution of the test statistic (Theorem 1)." ]
[ "mixed membership model" ]
background
{ "title": "A goodness-of-fit test for stochastic block models", "abstract": "The stochastic block model is a popular tool for studying community structures in network data. We develop a goodness-of-fit test for the stochastic block model. The test statistic is based on the largest singular value of a residual matrix obtained by subtracting the estimated block mean effect from the adjacency matrix. Asymptotic null distribution is obtained using recent advances in random matrix theory. The test is proved to have full power against alternative stochastic block models with finer structures. These results naturally lead to a consistent sequential testing estimate of the number of communities." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
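The goodness-of-fit record above describes its statistic concretely: subtract the estimated block mean effect from the adjacency matrix and take the largest singular value of the residual. A minimal sketch of that computation, assuming community labels are given (the paper estimates them; the label-handling here is simplified for illustration):

```python
import numpy as np

def gof_statistic(A, labels):
    """Largest singular value of the residual A - P_hat, where P_hat
    replaces each block of A by its estimated block mean.

    A      : (n, n) symmetric 0/1 adjacency matrix
    labels : length-n integer community assignment in {0, ..., K0-1}
    """
    K = int(labels.max()) + 1
    P_hat = np.zeros_like(A, dtype=float)
    for a in range(K):
        for b in range(K):
            mask = np.outer(labels == a, labels == b)
            np.fill_diagonal(mask, False)       # ignore the diagonal
            if mask.any():
                P_hat[mask] = A[mask].mean()
    R = A - P_hat                               # residual matrix
    return np.linalg.svd(R, compute_uv=False)[0]
```

If A is exactly block-constant off the diagonal, the residual vanishes and the statistic is 0; under the null, the residual approximates a generalized Wigner matrix, as the record notes.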
1501.00199
0705.4485
RELATED WORK
All these models are closely related to the mixed-membership stochastic blockmodels of #REFR . Co-clustering.
[ "Following a similar intuition as AC-CAMS but different perspective and focus, #OTHEREFR extended the IBP to handle k > 2 for link prediction tasks on binary graphs.", "Our work differs in its focus on general, real-valued matrices, its application of co-clustering, and its significantly simpler parameterization.", "Co-clustering was incorporated into a factorization approach to recommendation in #OTHEREFR .", "While the co-clustering improved modeling accuracy, it did not reduce the model complexity of the underlying factorization.", "Finally, #OTHEREFR proposed a factorization model based on a Dirichlet process over users and columns." ]
[ "The technique was originally used for understanding the clustering of rows and columns of a matrix rather than for matrix approximation or completion #OTHEREFR .", "This formulation was well suited for biological tasks but evolved to cover a wider variety of objectives #OTHEREFR .", "#OTHEREFR defined a soft co-clustering objective akin to a factorization model.", "Recent work has defined a Bayesian model for co-clustering focused on matrix modeling #OTHEREFR .", "#OTHEREFR focuses on exploiting co-clustering ensembles, but do so by finding a single consensus co-clustering." ]
[ "mixed-membership stochastic blockmodels" ]
background
{ "title": "ACCAMS: Additive Co-Clustering to Approximate Matrices Succinctly", "abstract": "Matrix completion and approximation are popular tools to capture a user's preferences for recommendation and to approximate missing data. Instead of using low-rank factorization we take a drastically different approach, based on the simple insight that an additive model of co-clusterings allows one to approximate matrices efficiently. This allows us to build a concise model that, per bit of model learned, significantly beats all factorization approaches in matrix completion. Even more surprisingly, we find that summing over small co-clusterings is more effective in modeling matrices than classic co-clustering, which uses just one large partitioning of the matrix. Following Occam's razor principle, the fact that our model is more concise and yet just as accurate as more complex models suggests that it better captures the latent preferences and decision making processes present in the real world. We provide an iterative minimization algorithm, a collapsed Gibbs sampler, theoretical guarantees for matrix approximation, and excellent empirical evidence for the efficacy of our approach. We achieve state-of-the-art results for matrix completion on Netflix at a fraction of the model complexity." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1707.01350
0705.4485
The model
We note that this model is very close to the mixed membership stochastic block model #REFR , but without the assumption of a Dirichlet distribution on the community memberships θ_i.
[ "The row vector θ_i ∈ [0, 1]^K is a community membership vector for node i.", "We introduce the community membership matrix Θ ∈ [0, 1]^{n×K} and further assume that each row θ_i of Θ is normalized: Σ_{k=1}^{K} θ_ik = 1.", "So, we can interpret θ_i as a vector of probabilities for the node i to belong to one of the communities. Finally, in matrix form we can write", "Let us further denote", "The considered model is directly related to several models in the literature." ]
[ "It can also be considered a particular case of the OCCAM model (Zhang et al., 2014).", "Also, our definition of SBMO generalizes the one from (Kaufmann et al., 2016), where only binary community memberships are considered.", "Compared to the model considered in #OTHEREFR , we consider a more general situation, where the matrix B is allowed to be any full-rank symmetric matrix.", "Finally, the ordinary stochastic block model is a particular instance of our model, where each vector of community memberships θ_i has exactly one non-zero entry (equal to one)." ]
[ "mixed membership stochastic" ]
background
{ "title": "Consistent parameter estimation in general stochastic block models with overlaps", "abstract": "This paper considers the parameter estimation problem in the Stochastic Block Model with Overlaps (SBMO), which is a quite general instance of a random graph model allowing for overlapping community structure. We present a new algorithm, successive projection overlapping clustering (SPOC), which combines the ideas of spectral clustering and a geometric approach for separable non-negative matrix factorization. The proposed algorithm is provably consistent under SBMO with general conditions on the parameters of the model. SPOC is also shown to perform well experimentally in comparison to other algorithms." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
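The SBMO record above culminates in the matrix-form expectation E[A] = Θ B Θᵀ, with rows of Θ on the simplex and B symmetric full-rank. A short sketch of that expected adjacency (illustrative; assumes the normalization and symmetry conditions stated in the record):

```python
import numpy as np

def expected_adjacency(Theta, B):
    """Expected adjacency E[A] = Theta @ B @ Theta.T under the
    overlapping block model described in the record.

    Theta : (n, K) membership matrix, each row on the K-simplex
    B     : (K, K) symmetric block matrix
    """
    assert np.allclose(Theta.sum(axis=1), 1.0), "rows must sum to 1"
    assert np.allclose(B, B.T), "B must be symmetric"
    return Theta @ B @ Theta.T
```

For pure memberships (one-hot rows) this reduces to the ordinary SBM, recovering the entries of B directly, which matches the record's remark that the SBM is a particular instance of the model.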
1904.06738
0705.4485
Assumptions and Identifiability
Our basic unit of length will be the bound on the directional variance, σ, see #REFR .
[ "Let M be a d × k matrix with the vertices of the simplex K as its columns.", "We assume there are n unknown points P ·,1 , P ·,2 , . . .", ", P ·,n ∈ CH(M), where, P ·,j is the point in K = CH(M) whose perturbed version is data point A ·,j ." ]
[ "So, in words, σ^2 is the maximum over all directions of the mean squared perturbation of A_{·,j} from P_{·,j}.", "If we had a stochastic model of data with E(A_{·,j} | P_{·,j}) = P_{·,j}, σ^2 would be the maximum empirical variance in any direction. We don't assume knowledge of σ.", "As stated in the introduction, we make two main assumptions: Extreme Data and Well-Separatedness.", "We state the assumptions formally after the following basic Lemma.", "This just follows from the fact that |A_{·,S} − P_{·,S}| = (1/|S|) |(A − P) 1_S| and |1_S| = √|S|." ]
[ "directional variance" ]
background
{ "title": "Finding a latent k-simplex in O(k · nnz(data)) time via Subset Smoothing", "abstract": "The core problem in many Latent Variable Models, widely used in Unsupervised Learning, is to find a latent k-simplex K in R^d given perturbed points from it, many of which lie far outside the simplex. This problem was stated in [2] as an open problem. We address this problem under two deterministic assumptions which replace the varied stochastic assumptions specific to relevant individual models. Our first contribution is to show that the convex hull K′ of the (n choose δn) points obtained by averaging all δn-subsets of the data points (δ to be specified) is close to K. We call this \"subset-smoothing\". While K′ can have exponentially many vertices, it is easily seen to have a polynomial-time Optimization Oracle which in fact runs in time O(nnz(data)). This is the starting point for our algorithm. The algorithm is simple: it has k stages, in each of which we use the oracle to find max |u · x| over x ∈ K′ for a carefully chosen u; the optimal x is an approximation to a new vertex of K. The simplicity does not carry over to the proof of correctness. The proof is involved and uses existing and new tools from Numerical Analysis, especially angles between singular spaces of close-by matrices. However, the simplicity of the algorithm, especially the fact that the only way we use the data is to do matrix-vector products, leads to the claimed time bound. This matches the best known algorithms in the special cases and is better when the input is sparse, as indeed is the case in many applications. Our algorithm applies to many special cases, including Topic Models, Approximate Non-negative Matrix Factorization, Overlapping Community Detection and Clustering." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
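The directional-variance bound σ in the record above is the maximum over unit directions u of the mean squared perturbation (1/n) Σ_j (u · (A_{·,j} − P_{·,j}))², which equals ‖A − P‖₂² / n, the squared spectral norm of the perturbation matrix over the number of data points. A small sketch of that identity (function name illustrative):

```python
import numpy as np

def directional_variance_bound(A, P):
    """sigma^2 = max over unit directions u of the mean squared
    perturbation of the columns of A from the columns of P, i.e.
    ||A - P||_2^2 / n with n the number of columns (data points)."""
    n = A.shape[1]
    top = np.linalg.norm(A - P, 2)   # largest singular value of A - P
    return top ** 2 / n
```

This makes explicit why σ is a natural "unit of length" for the simplex-recovery analysis: it is computable from a single spectral norm of the perturbation.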
1508.06675
0705.4485
Example 2.3 (Stochastic block model on k blocks). Let Ω = [k]
To express the mixed membership block model of #REFR as a W-random graph, we define Ω to be the k-dimensional simplex ∆_k and equip it with a Dirichlet distribution with some parameters α = (α_1 , . . . , α_k ).
[ "Then we define W by first partitioning [0, 1] into k adjacent intervals of lengths p 1 , . . .", ", p k , and then setting W equal to β ij on I i × I j .", "Note that the random graphs generated by W and W are equal in distribution.", "We denote the graphon W by W[p, B], or by W[B] if all the probabilities p i are equal.", "(We will also sometimes abuse notation by identifying it with W , when this does not seem likely to cause confusion.) Example 2.4 (Mixed membership stochastic block model)." ]
[ "In other words, the probability density at (p 1 , . . . , p k ) is proportional to", "Given a symmetric matrix (β ij ) of non-negative numbers, we then define", "As in the stochastic block model, β ij describes the affinity between communities i and j, but now each vertex is assigned a probability distribution p over the set of communities (rather than being assigned a single community)." ]
[ "W -random graph" ]
background
{ "title": "Consistent nonparametric estimation for heavy-tailed sparse graphs", "abstract": "Abstract. We study graphons as a non-parametric generalization of stochastic block models, and show how to obtain compactly represented estimators for sparse networks in this framework. Our algorithms and analysis go beyond previous work in several ways. First, we relax the usual boundedness assumption for the generating graphon and instead treat arbitrary integrable graphons, so that we can handle networks with long tails in their degree distributions. Second, again motivated by real-world applications, we relax the usual assumption that the graphon is defined on the unit interval, to allow latent position graphs where the latent positions live in a more general space, and we characterize identifiability for these graphons and their underlying position spaces. We analyze three algorithms. The first is a least squares algorithm, which gives an approximation we prove to be consistent for all square-integrable graphons, with errors expressed in terms of the best possible stochastic block model approximation to the generating graphon. Next, we analyze a generalization based on the cut norm, which works for any integrable graphon (not necessarily square-integrable). Finally, we show that clustering based on degrees works whenever the underlying degree distribution is atomless. Unlike the previous two algorithms, this third one runs in polynomial time." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
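The graphon record above elides the definition of W after "we then define". For the mixed membership block model, the usual choice is W(p, q) = pᵀ β q with latent positions p drawn from a Dirichlet on the simplex; the sketch below assumes that form (with entries of β in [0, 1] so the result is a valid edge probability) and is illustrative rather than the paper's exact construction.

```python
import numpy as np

def sample_mm_graphon(n, alpha, beta, rng=None):
    """W-random graph for the mixed membership block model:
    latent positions p_i ~ Dirichlet(alpha) on the k-simplex,
    edge probability W(p_i, p_j) = p_i^T beta p_j (assumed form)."""
    rng = np.random.default_rng(rng)
    P = rng.dirichlet(alpha, size=n)   # latent positions, shape (n, k)
    W = P @ beta @ P.T                 # pairwise edge probabilities
    U = rng.random((n, n))
    U = np.triu(U, 1)
    U = U + U.T                        # symmetric uniform draws
    A = (U < W).astype(int)
    np.fill_diagonal(A, 0)             # simple graph: no self-loops
    return A
```

Setting β diagonal recovers assortative communities, while off-diagonal entries of β control cross-community affinities, exactly as the record describes.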
1301.2659
0705.4485
I. INTRODUCTION
Finally, some recent approaches consider non-Boolean latent variables: cluster assignments are not hard, and each vertex has a degree of affiliation to each cluster #REFR .
[ "More recent deterministic approaches have focused on optimizing criteria that quantify how well the image graph summarizes the graph #OTHEREFR (see e.g. #OTHEREFR for details on such criteria). Other approaches include stochastic blockmodeling.", "In those generative models, a latent cluster indicator variable is associated to each actor/vertex.", "Conditionally to their latent variables, the probability of observing an edge between two actors follows some standard distribution (a Bernoulli distribution in the simplest case) whose parameters depend only on the pair of clusters designated by the latent variables.", "In early approaches, the number of clusters is chosen by the user #OTHEREFR .", "More recent techniques determine automatically the number of clusters using a Dirichlet Process #OTHEREFR ." ]
[ "Studies on evolving graphs are quite recent.", "The majority of the methods define an evolving graph as a sequence of static snapshots.", "In some approaches, the time segments are obtained by making an agglomerative hierarchical grouping of the snapshots and intervals using a similarity measure #OTHEREFR .", "As for stochastic blockmodeling, an adaptation of the mixed membership model has been proposed that studies the evolution of the latent variables over time #OTHEREFR .", "Graphscope #OTHEREFR is a two-stage method dedicated to simple bipartite graphs that tracks structures within time-evolving graphs." ]
[ "cluster assignments", "non-boolean latent variables" ]
background
{ "title": "A Triclustering Approach for Time Evolving Graphs", "abstract": "This paper introduces a novel technique to track structures in time-evolving graphs. The method is based on a parameter-free approach for three-dimensional co-clustering of the source vertices, the target vertices and the time. All these features are simultaneously segmented in order to build time segments and clusters of vertices whose edge distributions are similar and evolve in the same way over the time segments. The main novelty of this approach lies in that the time segments are directly inferred from the evolution of the edge distribution between the vertices, thus not requiring the user to make an a priori discretization. Experiments conducted on a synthetic dataset illustrate the good behaviour of the technique, and a study of a real-life dataset shows the potential of the proposed approach for exploratory data analysis." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1907.12528
0705.4485
Overview of Main Results
From #REFR , we can see that, in the case of graphons, G n,b follows the same distribution as G b .
[ "Now that we have introduced the sparse graphon model, we may provide some intuition as to why a general theory is possible for this model.", "At a high level, subsampling often works for many data generating processes because a size b subsample can itself be viewed as a size b sample from the population.", "If the size b subsamples are not too dependent, then we may aggregate them to form a \"good\" empirical estimate of the size b sampling distribution.", "Furthermore, if the functional of interest converges in distribution, then the sampling distributions of the size b subsamples and the size n sample should be close asymptotically as b grows with n.", "Let G n,b denote an induced subgraph formed from b vertices of G n ." ]
[ "Therefore, if the induced subgraphs are not too dependent, then intuitively, subsampling should be valid under very general conditions.", "We will show that this is indeed the case in Section 2.2.", "For sparse graphons, observe that G n,b generally does not follow the same distribution as G b .", "It will typically be the case that the induced subgraph is sparser, since typically ρ b > ρ n .", "However, a general theory is still possible for sparse graphons." ]
[ "graphons" ]
background
{ "title": "Subsampling Sparse Graphons Under Minimal Assumptions", "abstract": "We establish a general theory for subsampling network data generated by the sparse graphon model. In contrast to previous work for network data, we demonstrate validity under minimal assumptions; the main requirement is weak convergence of the functional of interest. We study the properties of two subsampling procedures: vertex subsampling, and p-subsampling. For the first procedure, we prove validity under the mild condition that the number of subsampled vertices is o(n). For the second, we establish validity under analogous conditions on the expected subsample size. For both procedures, we also establish conditions under which uniform validity holds. Under appropriate sparsity conditions, we also derive limiting distributions for the nonzero eigenvalues of the adjacency matrix of a low rank sparse graphon. Our weak convergence result immediately yields the validity of subsampling for the nonzero eigenvalues under suitable assumptions." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
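The subsampling intuition described in the record above (form induced subgraphs G_{n,b} from b of the n vertices, then aggregate a functional across many subsamples to approximate its size-b sampling distribution) can be sketched as follows. This is a minimal illustration, not the paper's procedure: the functional `edge_density`, the Erdős–Rényi toy graph, and all sizes are illustrative choices.

```python
import numpy as np

def edge_density(A):
    """Fraction of ordered vertex pairs joined by an edge."""
    n = A.shape[0]
    return A.sum() / (n * (n - 1))

def vertex_subsample(A, b, num_subsamples, functional, seed=0):
    """Evaluate `functional` on induced subgraphs built from b uniformly
    chosen vertices, approximating the size-b sampling distribution."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    out = []
    for _ in range(num_subsamples):
        idx = rng.choice(n, size=b, replace=False)   # b of the n vertices
        out.append(functional(A[np.ix_(idx, idx)]))  # induced subgraph G_{n,b}
    return np.array(out)

# Toy data: a symmetric Erdos-Renyi graph with edge probability p.
rng = np.random.default_rng(0)
n, p = 200, 0.1
A = np.triu((rng.random((n, n)) < p).astype(int), 1)
A = A + A.T
vals = vertex_subsample(A, b=30, num_subsamples=500, functional=edge_density)
```

Because each vertex pair is equally likely to be included, the subsample densities are centered on the full-graph density, matching the "size b subsample is a size b sample" intuition quoted above.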
1411.3675
0705.4485
PROBLEM FORMULATION
Note that this is different from Mixed Membership Stochastic Block Models #REFR where nodes are mapped onto the simplex.

[ "(Temporal Latent Space Inference) Given a dynamic social network G = (G0, G1,. . .", ", Gt), we aim to find a k-dimension latent space representation at each time stamp Zt that minimizes the quadratic loss with temporal regularization:", "where λ is a regularization parameter, and the term (1 − Zt(u)Zt+1(u) T ) penalizes node u for suddenly changing her latent position.", "Note that when computing the quadratic loss Wt − ZtZ T t 2 F , we ignore all of the diagonal entries.", "In the above model, the latent representation of each node corresponds to a point on the surface of a unit hyper-sphere." ]
[ "In practice, we find that sphere modeling gives us a clearer boundary between linked pairs and non-linked pairs when we project all pairs of nodes into the latent space.", "In addition, we impose the constraints Zt ≥ 0 not only because the non-negativity establishes the duality between our modeling and Non-negative Matrix Factorization, but also because it gives latent space an intuitive parts-based interpretation.", "As suggested by Lee and Seung #OTHEREFR , the non-negative constraints lead to a parts-based representation because they allow only additive, not subtractive or combinations.", "In the facial image example #OTHEREFR , each dimension of latent space with non-negative constraints corresponds to a part of faces such as eyes; while in the social network, each node represents a user and each dimension of latent space corresponds to a part of users' attributes such as ages. Link prediction. Given that we have inferred Z1, . . . , Zt by optimizing Eq.", "1, our goal is to predict the adjacency matrix Wt+1 at the next time stamp t+1. The most natural estimator is the conditional expectation" ]
[ "nodes", "Mixed Membership Stochastic" ]
background
{ "title": "Technical report : Scalable Link Prediction in Dynamic Networks via Non-Negative Matrix Factorization", "abstract": "We propose a scalable temporal latent space model for link prediction in dynamic social networks, where the goal is to predict links over time based on a sequence of interactions from previous graph snapshots. The model assumes that each user lies in an unobserved latent space and interactions are more likely to form between similar users in the latent space representation. In addition, the model allows each user to gradually move its position in the latent space as the network structure evolves over time. We present a global optimization algorithm to effectively infer the temporal latent space, with a quadratic convergence rate. Two alternative optimization algorithms with local and incremental updates are also proposed, allowing the model to scale to larger networks without compromising prediction accuracy. Empirically, we demonstrate that our model, when evaluated on a number of real-world dynamic networks, significantly outperforms existing approaches for temporal link prediction in terms of both scalability and predictive power." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
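The objective described in the record above (off-diagonal quadratic reconstruction loss plus a temporal penalty 1 − Z_t(u)·Z_{t+1}(u) for sudden moves in latent space) can be sketched as follows. Eq. 1 of the paper is not reproduced verbatim in the record, so this is an assumed reading of the stated terms, with `lam` playing the role of λ.

```python
import numpy as np

def temporal_latent_loss(Ws, Zs, lam):
    """Quadratic reconstruction loss plus a temporal penalty that
    discourages a node from suddenly changing its latent position."""
    loss = 0.0
    for t, (W, Z) in enumerate(zip(Ws, Zs)):
        R = W - Z @ Z.T
        np.fill_diagonal(R, 0.0)  # diagonal entries are ignored
        loss += np.sum(R ** 2)
        if t + 1 < len(Zs):
            # 1 - Z_t(u) . Z_{t+1}(u), summed over nodes u
            loss += lam * np.sum(1.0 - np.sum(Zs[t] * Zs[t + 1], axis=1))
    return loss

# A static embedding with non-negative, unit-norm rows (points on the
# unit hyper-sphere, as in the record) incurs zero loss.
rng = np.random.default_rng(0)
Z = np.abs(rng.random((5, 2)))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)
W = Z @ Z.T
np.fill_diagonal(W, 0.0)
static_loss = temporal_latent_loss([W, W], [Z, Z], lam=0.5)
```

Perturbing the second snapshot's positions makes both the reconstruction term and the temporal penalty positive, which is the behavior the regularizer is described as enforcing.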
1904.02016
0705.4485
Stochastic Blockmodels
Rather than restrict each node to a single cluster, the Mixed Membership Stochastic Blockmodel #REFR associates each node with a distribution over clusters, allowing nodes to perform several social roles.
[ "Typically, the network is assumed to be binary, and the interactions are modeled as Bernoulli random variables.", "In a Bayesian setting, we place conjugate (in the binary case, beta) priors on the λ k, , and a Dirichletmultinomial prior on the community memberships.", "A number of variants to the basic stochastic blockmodel have been proposed.", "#OTHEREFR uses a gamma/Poisson link in place of a beta/Bernoulli, to obtain distributions over integer-valued networks, and also incorporates a per-node parameter that allows nodes in the same community to have different degree distribution.", "The Infinite Relational Model #OTHEREFR allows a potentially infinite number of communities, with membership probabilities distributed according to a Dirichlet process." ]
[ "In this work, we adopt the Poisson links introduced by #OTHEREFR to capture the communication volume between nodes.", "Our model could be extended to incorporate the nonparametric and mixed membership behavior described above; however, as we discuss in Section 6, this would significantly increase the computational cost of the model and we leave this for future work." ]
[ "Mixed Membership Stochastic" ]
background
{ "title": "Stochastic Blockmodels with Edge Information", "abstract": "Stochastic blockmodels allow us to represent networks in terms of a latent community structure, often yielding intuitions about the underlying social structure. Typically, this structure is inferred based only on a binary network representing the presence or absence of interactions between nodes, which limits the amount of information that can be extracted from the data. In practice, many interaction networks contain much more information about the relationship between two nodes. For example, in an email network, the volume of communication between two users and the content of that communication can give us information about both the strength and the nature of their relationship. In this paper, we propose the Topic Blockmodel, a stochastic blockmodel that uses a count-based topic model to capture the interaction modalities within and between latent communities. By explicitly incorporating information sent between nodes in our network representation, we are able to address questions of interest in real-world situations, such as predicting recipients for an email message or inferring the content of an unopened email. Further, by considering topics associated with a pair of communities, we are better able to interpret the nature of each community and the manner in which it interacts with other communities." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
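The mixed membership generative process described in the record above (each node gets a distribution over clusters, and roles are re-drawn for every interaction) can be sketched as a small simulator. The parameter values are illustrative, not taken from either paper.

```python
import numpy as np

def sample_mmsb(n, alpha, B, seed=0):
    """Directed adjacency matrix from a mixed membership blockmodel:
    each node draws a Dirichlet membership vector, and each ordered
    pair re-draws sender/receiver roles before a Bernoulli edge."""
    rng = np.random.default_rng(seed)
    K = len(alpha)
    Pi = rng.dirichlet(alpha, size=n)  # node-specific role distributions
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            zi = rng.choice(K, p=Pi[i])  # role i plays toward j
            zj = rng.choice(K, p=Pi[j])  # role j plays toward i
            A[i, j] = int(rng.random() < B[zi, zj])
    return A, Pi

# Two roles; edges are much more likely within a role than across.
B = np.array([[0.8, 0.05],
              [0.05, 0.8]])
A, Pi = sample_mmsb(n=30, alpha=[0.3, 0.3], B=B)
```

A sparse Dirichlet concentration (0.3 here) makes most nodes nearly single-role, while still allowing a node to "perform several social roles" across different interactions.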
1104.2930
0705.4485
Proof of Theorem 1
However, since the addition of a noise feature would increase the value of (σ * d ) #REFR , it follows that κ will be increased by adding a noise feature.
[ "SS W and SS B can be calculated as follows.", "If we assume Σ = I p×p is always true during the growth of the clustering vector (this holds if duplicated features are excluded), then", "Without loss of generality, let I = {1, 2, ..., d − 1} and let the noise feature be the d th feature.", "By the equivalence, in the population, of K-means clustering and the optimal clustering rule h * (Lemma 1 in Section 7.1) for a mixture of two spherical Gaussians, K-means clustering assigns", "which is equivalent to 2 ." ]
[]
[ "noise feature" ]
background
{ "title": "Cluster Forests", "abstract": "Inspired by Random Forests (RF) in the context of classification, we propose a new clustering ensemble method-Cluster Forests (CF). Geometrically, CF randomly probes a high-dimensional data cloud to obtain \"good local clusterings\" and then aggregates via spectral clustering to obtain cluster assignments for the whole dataset. The search for good local clusterings is guided by a cluster quality measure κ. CF progressively improves each local clustering in a fashion that resembles the tree growth in RF. Empirical studies on several real-world datasets under two different performance metrics show that CF compares favorably to its competitors. Theoretical analysis shows that the κ criterion is shown to grow each local clustering in a desirable way-it is \"noise-resistant.\" A closed-form expression is obtained for the mis-clustering rate of spectral clustering under a perturbation model, which yields new insights into some aspects of spectral clustering." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
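The record above refers to SS_W and SS_B without the paper's formulas surviving extraction. As a hedged sketch, the standard within-cluster and between-cluster sums of squares (the quantities a quality measure like κ is built from; the exact κ definition is not taken from the record) satisfy the usual decomposition SS_W + SS_B = SS_T:

```python
import numpy as np

def within_between_ss(X, labels):
    """Within-cluster (SS_W) and between-cluster (SS_B) sums of squares.
    They decompose the total sum of squares: SS_W + SS_B = SS_T."""
    mu = X.mean(axis=0)
    ssw = 0.0
    ssb = 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        ssw += np.sum((Xc - mc) ** 2)          # spread around own centroid
        ssb += len(Xc) * np.sum((mc - mu) ** 2)  # centroid spread around grand mean
    return ssw, ssb

# Two well-separated blobs: SS_B dominates SS_W, signaling a good clustering.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(10, 1, (50, 2))])
labels = np.repeat([0, 1], 50)
ssw, ssb = within_between_ss(X, labels)
```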
1412.3432
0705.4485
Introduction
The mixed membership stochastic block model #REFR draws membership vectors Z i· from a Dirichlet prior.
[ "While for specific likelihoods one can develop methods that are both fast and more accurate than spectral clustering, such as the pseudo-likelihood #OTHEREFR , in general spectral methods remain the most scalable option available.", "While the majority of the existing models and algorithms for community detection focus on discovering non-overlapping communities, there has been a growing interest in exploring the overlapping scenario, although both extending the existing models to the overlapping case and developing brand new models remain challenging.", "Like methods for non-overlapping community detection, most existing approaches for detecting overlapping communities can be categorized as either algorithmic or model-based methods.", "Model-based methods focus on specifying how node community memberships determine edge probabilities.", "For example, the overlapping stochastic block model (OSBM) #OTHEREFR ) extends the SBM by allowing the entries of the membership matrix Z to be independent Bernoulli variables, thus allowing multiple \"1\"s in one row, or all \"0\"s." ]
[ "The membership vector is drawn again to generate every edge, instead of being fixed for the node, so the community membership for node i varies depending on which node j it is interacting with.", "The \"colored edges\" model #OTHEREFR , sometimes referred to as the Ball-Karrer-Newman model or BKN, allows continuous community membership by relaxing the binary Z to a matrix with non-negative entries (with some normalization constraints for identifiability), and discarding the matrix B.", "The Bayesian nonnegative matrix factorization model #OTHEREFR is related to the model but with notable differences.", "Algorithmic methods for overlapping community detection mostly rely on local greedy searches and intuitive criteria.", "Current approaches include detecting each community separately by maximizing a local measure of goodness of the estimated community #OTHEREFR and updating an initial estimate of the community membership by neighborhood vote #OTHEREFR ." ]
[ "mixed membership stochastic" ]
background
{ "title": "Detecting Overlapping Communities in Networks Using Spectral Methods", "abstract": "Community detection is a fundamental problem in network analysis. In practice, communities often overlap, which makes the problem more challenging. Here we propose a general, flexible, and interpretable generative model for overlapping communities, which can be viewed as generalizing several previous models in different ways. We develop an efficient spectral algorithm for estimating the community memberships, which deals with the overlaps by employing the K-medians algorithm rather than the usual K-means for clustering in the spectral domain. We show that the algorithm is asymptotically consistent when networks are not too sparse and the overlaps between communities not too large. Numerical experiments on both simulated networks and many real social networks demonstrate that our method performs well compared to a number of benchmark methods for overlapping community detection." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1302.2671
0705.4485
Spatial Gaussian Mixture Model (GMM)
In this sense, the model borrows from the mixed membership stochastic block model #REFR , which assumes that the agents can interact while assuming different roles.
[ "Namely, we assume that the interaction of specific pair is spatially distributed according to a pair-specific Gaussian mixture model:", "In Equation 6 , C is the number of components, w c ij is the weight of c-th cluster for pair i, j.", "More weights on specific cluster on space means more chances of appearance within the cluster.", "For simplicity, the dynamics of the weights over time has been ignored.", "We would like to note that the use of Gaussian mixtures rather than a single Gaussian model is justified by the observation that interactions among the same pair might have different modalities (e.g., school, or movies, etc.)." ]
[ "Equations 2-6 complete the definition of our latent point process model.", "Next we describe our approach for efficient learning and inference with LPPM." ]
[ "mixed membership stochastic" ]
method
{ "title": "Latent Self-Exciting Point Process Model for Spatial-Temporal Networks", "abstract": "Social network data is generally incomplete with missing information about nodes and their interactions. Here we propose a spatialtemporal latent point process model that describes geographically distributed interactions between pairs of entities. In contrast to most existing approaches, we assume that interactions are not fully observable, and certain interaction events lack information about participants. Instead, this information needs to be inferred from the available observations. We develop an efficient approximate algorithm based on variational expectationmaximization to infer unknown participants in an event given the location and the time of the event. We validate the model on synthetic as well as real-world data, and obtain very promising results on the identityinference task. We also use our model to predict the timing and participants of future events, and demonstrate that it compares favorably with a baseline approach." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
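The pair-specific spatial Gaussian mixture of Equation 6 in the record above (pick component c with probability w^c_ij, then draw a location from that component) can be sketched as follows; the pair index is suppressed and all parameter values are illustrative, not from the paper.

```python
import numpy as np

def sample_interaction_location(weights, means, covs, seed=0):
    """One spatial draw for a pair (i, j): choose mixture component c
    with probability w^c_ij, then sample from N(mu_c, Sigma_c)."""
    rng = np.random.default_rng(seed)
    c = rng.choice(len(weights), p=weights)
    return rng.multivariate_normal(means[c], covs[c])

# A pair that interacts in two modalities (e.g. "school" vs. "movies"),
# with more weight on the first cluster meaning more appearances there.
weights = [0.7, 0.3]
means = [np.zeros(2), np.array([5.0, 5.0])]
covs = [np.eye(2) * 0.5, np.eye(2) * 0.5]
loc = sample_interaction_location(weights, means, covs)
```

Using a mixture rather than a single Gaussian matches the record's justification: the same pair's interactions can cluster around several distinct locations.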
1711.05610
0705.4485
Stochastic block models
However, the results contained herein translate immediately to the mixed membership SBM setting #REFR ; details are omitted for brevity.
[ "With no restrictions on P , ER(P ) random graphs can be viewed as n-block SBMs and are the most general edge-independent random graph model.", "The latent community structure inherent to SBMs makes them a natural model for use in the traditional vertex nomination framework.", "Recall the traditional VN task: given a community of interest in a network and some examples of vertices that are/are not part of the community of interest, vertex nomination seeks to rank the remaining vertices in the network into a nomination list, with those vertices from the community of interest (ideally) concentrating at the top of the nomination list.", "As a result, previous work on VN consistency #OTHEREFR has been posed within the SBM framework, with the optimal scheme only obtaining its optimality for SBMs.", "We note that we consider herein the SBM setting where communities are disjoint and each vertex can only belong to a single community." ]
[]
[ "mixed membership" ]
result
{ "title": "On consistent vertex nomination schemes", "abstract": "Given a vertex of interest in a network G 1 , the vertex nomination problem seeks to find the corresponding vertex of interest (if it exists) in a second network G 2 . A vertex nomination scheme produces a rank list of the vertices in G 2 , where the vertices are ranked by how likely they are judged to be the corresponding vertex of interest in G 2 . The vertex nomination problem and related information retrieval tasks have attracted much attention in the machine learning literature, with numerous applications in social and biological networks. However, the current framework has often been confined to a comparatively small class of network models, and the concept of statistically consistent vertex nomination schemes has been only shallowly explored. In this paper, we extend the vertex nomination problem to a very general statistical model of graphs. Further, drawing inspiration from the longestablished classification framework in the pattern recognition literature, we provide definitions for the key notions of Bayes optimality and consistency in our extended vertex nomination framework, including a derivation of the Bayes optimal vertex nomination scheme. In addition, we prove that no universally consistent vertex nomination schemes exist. Illustrative examples are provided throughout." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1805.10043
0705.4485
Introduction
Although a variety of NE methods have been proposed, two major limitations exist in previous NE studies: #REFR Structure preservation. Previous studies applied random walks to learn representations.
[ "It has been reported that using embedded node representations can achieve promising performance on many network analysis tasks #OTHEREFR .", "Previous NE techniques mainly relied on eigendecomposition #OTHEREFR , but the high computational complexity of eigendecomposition makes it difficult to apply in real-world networks.", "With the fast development of neural network techniques, unsupervised embedding algorithms have been widely used in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors in the learned embedding space, e.g., word2vec #OTHEREFR and GloVe #OTHEREFR .", "By drawing an analogy between random walks on networks and word sequences in text, DeepWalk #OTHEREFR learns node representations based on random walks using the same mechanism of word2vec.", "Afterwards, a sequence of studies have been conducted to improve DeepWalk either by extending the definition of neighborhood to higher-order proximity #OTHEREFR or incorporating more information for node representations such as attributes #OTHEREFR and heterogeneity #OTHEREFR ." ]
[ "However, random walk based embedding strategies can only capture local structural information, i.e., first-order and higher-order proximity within the neighborhood of the target node #OTHEREFR and fail in capturing the global structural information, e.g., structural or regular equivalence #OTHEREFR .", "An example of global structural information and local structural information is shown in Fig.", "1 and empirical evidence based on this example for illustrating this limitation will be shown in Section 5.1. (2) Uncertainty modeling.", "Previous methods represent a node into a point vector in the learned embedding space. However, real-world networks may be noisy and imbalanced.", "Point vector representations are deterministic #OTHEREFR and are not capable of modeling the uncertainties of node representations." ]
[ "Structure preservation", "random walk" ]
method
{ "title": "struc2gauss: Structure Preserving Network Embedding via Gaussian Embedding", "abstract": "Abstract Network embedding (NE) is playing a principal role in network mining, due to its ability to map nodes into efficient low-dimensional embedding vectors. However, two major limitations exist in state-of-the-art NE methods: structure preservation and uncertainty modeling. Almost all previous methods represent a node into a point in space and focus on the local structural information, i.e., neighborhood information. However, neighborhood information does not capture the global structural information and point vector representation fails in modeling the uncertainty of node representations. In this paper, we propose a new NE framework, struc2gauss, which learns node representations in the space of Gaussian distributions and performs network embedding based on global structural information. struc2gauss first employs a given node similarity metric to measure the global structural information, then generates structural context for nodes and finally learns node representations via Gaussian embedding. Different structural similarity measures of networks and energy functions of Gaussian embedding are investigated. Experiments conducted on both synthetic and real-world data sets demonstrate that struc2gauss effectively captures the global structural information while state-of-the-art network embedding methods fails to, outperforms other meth- ods on the structure-based clustering task and provides more information on uncertainties of node representations." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
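The struc2gauss abstract above mentions energy functions between Gaussian node representations without specifying them in this record. A common choice for such an energy (an assumption here, not quoted from the paper) is the KL divergence between diagonal Gaussians:

```python
import numpy as np

def kl_diag_gaussians(mu0, var0, mu1, var1):
    """KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) ): an asymmetric
    energy between two Gaussian node embeddings."""
    mu0, var0 = np.asarray(mu0, float), np.asarray(var0, float)
    mu1, var1 = np.asarray(mu1, float), np.asarray(var1, float)
    k = mu0.size
    return 0.5 * (np.sum(var0 / var1)
                  + np.sum((mu1 - mu0) ** 2 / var1)
                  - k
                  + np.sum(np.log(var1) - np.log(var0)))

energy_self = kl_diag_gaussians([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0])
energy_far = kl_diag_gaussians([0.0, 0.0], [1.0, 1.0], [3.0, 3.0], [1.0, 1.0])
```

The variance terms are what let such an embedding express the uncertainty of a node's representation, which point-vector methods cannot.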
1805.10043
0705.4485
Structural Similarity
For example, Mixed Membership Stochastic Blockmodel #REFR infers the role distribution of each node using the Bayesian generative model.
[ "Network clusters can be based on either global or local structural information.", "Graph clustering based on global structural information is the problem of role discovery #OTHEREFR .", "In social science research, roles are represented as concepts of equivalence #OTHEREFR .", "Graph-based methods and feature-based methods have been proposed for this task.", "Graph-based methods take nodes and edges as input and directly partition nodes into groups based on their structural patterns." ]
[ "Feature-based methods first transfer the original network into feature vectors and then use clustering methods to group nodes.", "For example, RolX #OTHEREFR employs ReFeX #OTHEREFR to extract features of networks and then uses non-negative matrix factorization to cluster nodes.", "Local structural information based clustering corresponds to the problem of community detection #OTHEREFR .", "A community is a group of nodes that interact with each other more frequently than with those outside the group. Thus, it captures only local connections between nodes." ]
[ "role distribution", "Mixed Membership Stochastic" ]
method
{ "title": "struc2gauss: Structure Preserving Network Embedding via Gaussian Embedding", "abstract": "Abstract Network embedding (NE) is playing a principal role in network mining, due to its ability to map nodes into efficient low-dimensional embedding vectors. However, two major limitations exist in state-of-the-art NE methods: structure preservation and uncertainty modeling. Almost all previous methods represent a node into a point in space and focus on the local structural information, i.e., neighborhood information. However, neighborhood information does not capture the global structural information and point vector representation fails in modeling the uncertainty of node representations. In this paper, we propose a new NE framework, struc2gauss, which learns node representations in the space of Gaussian distributions and performs network embedding based on global structural information. struc2gauss first employs a given node similarity metric to measure the global structural information, then generates structural context for nodes and finally learns node representations via Gaussian embedding. Different structural similarity measures of networks and energy functions of Gaussian embedding are investigated. Experiments conducted on both synthetic and real-world data sets demonstrate that struc2gauss effectively captures the global structural information while state-of-the-art network embedding methods fails to, outperforms other meth- ods on the structure-based clustering task and provides more information on uncertainties of node representations." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1705.09439
0705.4485
Settings
Perplexity is a widely used measure to compare the performance of statistical models #REFR, and a lower value indicates better performance.
[ "In terms of hyperparameters, in line with other topic modeling work, we set α = To compare the performance of the session model and the SWA model, we use the perplexities of the two models." ]
[ "The perplexity of each model on the test data is given by:", "where R test u and V test ur represent the number of u's sessions and the number of logs in rth session of u in the test data, respectively.", "The p(a urj ) is computed based on the estimated parameters obtained by Equation (6) and (7) as follows:", "In terms of the number of topics, we compute the perplexity for K = 5, 10, 20, 30, 40, 50, 100, 200, and 300. Figure 2 shows the perplexity for each dataset.", "In any dataset, regardless of the amount of training data and the number of topics, the SWA model outperformed the session model." ]
[ "statistical models" ]
method
{ "title": "Taste or Addiction?: Using Play Logs to Infer Song Selection Motivation", "abstract": "Abstract. Online music services are increasing in popularity. They enable us to analyze people's music listening behavior based on play logs. Although it is known that people listen to music based on topic (e.g., rock or jazz), we assume that when a user is addicted to an artist, s/he chooses the artist's songs regardless of topic. Based on this assumption, in this paper, we propose a probabilistic model to analyze people's music listening behavior. Our main contributions are threefold. First, to the best of our knowledge, this is the first study modeling music listening behavior by taking into account the influence of addiction to artists. Second, by using real-world datasets of play logs, we showed the effectiveness of our proposed model. Third, we carried out qualitative experiments and showed that taking addiction into account enables us to analyze music listening behavior from a new viewpoint in terms of how people listen to music according to the time of day, how an artist's songs are listened to by people, etc. We also discuss the possibility of applying the analysis results to applications such as artist similarity computation and song recommendation." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1607.00084
0705.4485
Introduction
We focus on the popular mixed membership stochastic blockmodel (MMSB) #REFR where each node i, i ∈ [n], has a discrete probability distribution θ_i = (θ_{i1}, . . . , θ_{iK}) over K communities.
[ "Community detection is a fundamental problem in network analysis.", "It has been widely used in a diverse set of applications ranging from link prediction in social networks #OTHEREFR , predicting protein-protein or protein-DNA interactions in biological networks #OTHEREFR , to network protocol design such as data forwarding in delay tolerant networks #OTHEREFR .", "Traditional community detection assumes that every node in the network belongs to exactly one community, but many practical settings call for greater flexibility.", "For instance, individuals in a social network may have multiple interests, and hence are best described as members of multiple interest-based communities." ]
[ "The probability of linkage between nodes i and j depends on the degree of overlap between their communities:" ]
[ "stochastic blockmodel" ]
background
{ "title": "On Mixed Memberships and Symmetric Nonnegative Matrix Factorizations", "abstract": "The problem of finding overlapping communities in networks has gained much attention recently. Optimization-based approaches use non-negative matrix factorization (NMF) or variants, but the global optimum cannot be provably attained in general. Model-based approaches, such as the popular mixedmembership stochastic blockmodel or MMSB [1], use parameters for each node to specify the overlapping communities, but standard inference techniques cannot guarantee consistency. We link the two approaches, by (a) establishing sufficient conditions for the symmetric NMF optimization to have a unique solution under MMSB, and (b) proposing a computationally efficient algorithm called GeoNMF that is provably optimal and hence consistent for a broad parameter regime. We demonstrate its accuracy on both simulated and real-world datasets." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
0910.2034
0705.4485
A mixture model for networks.
Consequently, the hidden variables of their model can stand for more than one group for one node, whereas MixNet only considers one label per node. #REFR also model the sparsity of the network.
[ "Many strategies have been considered to construct models for clustering in networks.", "Variations mainly concern the nature of the link between nodes and the definition of nodes' memberships.", "For instance, the stochastic blockstructure model #OTHEREFR ; #OTHEREFR ] considers links that are dyads (X ij , X ji ), whereas MixNet considers a model on edges only.", "Consequently, MixNet implicitly assumes the independence of X ij and X ji conditionally on the latent structure.", "As for the definition of the label variables, the Mixed Membership Stochastic Blockmodel (MMSB) has been proposed to describe the interactions between objects playing multiple roles #OTHEREFR ]." ]
[ "This could be done in the context of MixNet by introducing a Dirac mass on zero for the conditional distribution of edges.", "Differences among approaches also concern the statistical framework that defines subsequent optimization strategies.", "The Bayesian setting has been a framework chosen by many authors, as it allows the integration of prior information and hierarchical structures #OTHEREFR ].", "On the contrary, our approach does not necessarily rely on stochastic strategies, meaning that each run provides the same set of parameters.", "However, the likelihood of mixture models in general is multimodal, which is a problem for both approaches." ]
[ "MixNet", "model" ]
background
{ "title": "Strategies for online inference of model-based clustering in large and growing networks", "abstract": "In this paper we adapt online estimation strategies to perform model-based clustering on large networks. Our work focuses on two algorithms, the first based on the SAEM algorithm, and the second on variational methods. These two strategies are compared with existing approaches on simulated and real data. We use the method to decipher the connexion structure of the political websphere during the US political campaign in 2008. We show that our online EMbased algorithms offer a good trade-off between precision and speed, when estimating parameters for mixture distributions in the context of random graphs." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
0910.2034
0705.4485
A mixture model for networks.
The Bayesian setting has been a framework chosen by many authors, as it allows the integration of prior information and hierarchical structures #REFR ].
[ "Consequently, MixNet implicitly assumes the independence of X ij and X ji conditionally on the latent structure.", "As for the definition of the label variables, the Mixed Membership Stochastic Blockmodel (MMSB) has been proposed to describe the interactions between objects playing multiple roles #OTHEREFR ].", "Consequently, the hidden variables of their model can stand for more than one group for one node, whereas MixNet only considers one label per node. #OTHEREFR also model the sparsity of the network.", "This could be done in the context of MixNet by introducing a Dirac mass on zero for the conditional distribution of edges.", "Differences among approaches also concern the statistical framework that defines subsequent optimization strategies." ]
[ "On the contrary, our approach does not necessarily rely on stochastic strategies, meaning that each run provides the same set of parameters.", "However, the likelihood of mixture models in general is multimodal, which is a problem for both approaches.", "In MCMC procedures it leads to potential label switching issues, and the variational EM may converge to local maxima.", "As the model and the statistical frameworks are different, clustering results are likely to be very different as well.", "In order to illustrate our point, we deviate from the political blog data and we use the small data set of #OTHEREFR which is used in #OTHEREFR ." ]
[ "hierarchical structures", "prior information" ]
background
{ "title": "Strategies for online inference of model-based clustering in large and growing networks", "abstract": "In this paper we adapt online estimation strategies to perform model-based clustering on large networks. Our work focuses on two algorithms, the first based on the SAEM algorithm, and the second on variational methods. These two strategies are compared with existing approaches on simulated and real data. We use the method to decipher the connexion structure of the political websphere during the US political campaign in 2008. We show that our online EMbased algorithms offer a good trade-off between precision and speed, when estimating parameters for mixture distributions in the context of random graphs." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1410.1940
0705.4485
Model Specification
Our model unifies the ideas from both the Mixed Membership Stochastic Blockmodel (MMSB) #REFR and the Latent Dirichlet Allocation (LDA) model [4] .
[ "We assume the pair-wise link Yp,q between person p and person q depends on the group identities of both p and q with the parameter B.", "Furthermore, we model the dependency between the group and the role using a multinomial distribution parameterized by a set of role mixture rate {θ1:M }.", "The role mixture rate characterizes the constitution of the group: the proportion of the population that plays the same role in the group.", "Finally, we model the activity feature vector of the individual Xp as the dependent variable of his role with parameter set {β1:K }. Table 1 summarizes the notations used in our model.", "We specify the generative process of the GLAD model in Algorithm 1." ]
[ "As shown in Figure 1 , the blue dashed rectangular on the left side resembles MMSB which models the formation of groups using link information.", "The red dashed polygon integrates the generating process of LDA which is often used for topic extraction from documents.", "By assuming mixture of groups and roles, we allow each person to have multiple roles and multiple group memberships.", "Without loss of generality, we assume that the activity data have discrete values and choose to model Xp with a multinomial distribution.", "When the activity data are distributed in other forms, we can easily adapt GLAD to model other type of Xp." ]
[ "Latent Dirichlet Allocation" ]
method
{ "title": "GLAD: group anomaly detection in social media analysis", "abstract": "Traditional anomaly detection on social media mostly focuses on individual point anomalies while anomalous phenomena usually occur in groups. Therefore it is valuable to study the collective behavior of individuals and detect group anomalies. Existing group anomaly detection approaches rely on the assumption that the groups are known, which can hardly be true in real world social media applications. In this paper, we take a generative approach by proposing a hierarchical Bayes model: Group Latent Anomaly Detection (GLAD) model. GLAD takes both pair-wise and pointwise data as input, automatically infers the groups and detects group anomalies simultaneously. To account for the dynamic properties of the social media data, we further generalize GLAD to its dynamic extension d-GLAD. We conduct extensive experiments to evaluate our models on both synthetic and real world datasets. The empirical results demonstrate that our approach is effective and robust in discovering latent groups and detecting group anomalies." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1402.4033
0705.4485
FRIENDSHIP PREDICTION ACROSS MUL-TIPLE SOCIAL NETWORKS
In the following, we firstly describe the foundation of ComFP, Mixed Membership Stochastic Blockmodels (MMSB) #REFR , and then present the details of the composite modeling.
[ "We propose a novel mixed membership model, Composite Friendship Prediction (ComFP), to predict the friendships across multiple social networks collectively." ]
[ "Finally, we propose an efficient Gibbs sampling method to infer the latent variables.", "To help understand the motivation of the proposed model, we visualize two sub networks from Tencent and Douban in Figure 1 .", "The Tencent's networks contain users who are 2-hops away from the first author of the paper, as well as their relationships in Tencent's instant messaging network QQ and Microblog network.", "The Douban's networks contain a random subset of users from the crawled dataset and their online and offline relationships.", "Obviously, the community structures and users' memberships to communities in different networks can be quite different." ]
[ "Mixed Membership Stochastic" ]
method
{ "title": "Friendship Prediction in Composite Social Networks", "abstract": "Friendship prediction is an important task in social network analysis (SNA). It can help users identify friends and improve their level of activity. Most previous approaches predict users' friendship based on their historical records, such as their existing friendship, social interactions, etc. However, in reality, most users have limited friends in a single network, and the data can be very sparse. The sparsity problem causes existing methods to overfit the rare observations and suffer from serious performance degradation. This is particularly true when a new social network just starts to form. We observe that many of today's social networks are \"composite\" in nature, where people are often engaged in multiple networks. In addition, users' friendships are always correlated, for example, they are both friends on Facebook and Google+. Thus, by considering those overlapping users as the bridge, the friendship knowledge in other networks can help predict their friendships in the current network. This can be achieved by exploiting the knowledge in different networks in a collective manner. However, as each individual network has its own properties that can be incompatible and inconsistent with other networks, the naive merging of all networks into a single one may not work well. The proposed solution is to extract the common behaviors between different networks via a hierarchical Bayesian model. It captures the common knowledge across networks, while avoiding negative impacts due to network differences. Empirical studies demonstrate that the proposed approach improves the mean average precision of friendship prediction over state-of-theart baselines on nine real-world social networking datasets significantly." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1402.4033
0705.4485
RELATED WORKS
The main idea is to represent each entity as a mixed membership vector over communities or topics. MMSB #REFR is one of the basic models.
[ "Cross-domain collaborative filtering is also introduced #OTHEREFR , where hierarchical Bayesian models are proposed to solve multiple domain user personalization.", "Knowledge in multiple social networks is exploited to help predict users' behaviors #OTHEREFR recently.", "However, these approaches are applied on user-item interaction networks instead of social networks. In addition, traditional transfer learning focuses on borrowing knowledge.", "But in this paper, knowledge in each single network is incomplete, thus we need to consolidate pieces of knowledge from multiple networks instead of simply borrowing.", "Mixed Membership Models Recently, mixed membership models have been demonstrated to be effective to model relational data, such as LDA #OTHEREFR and MMSB #OTHEREFR ." ]
[ "It aims to identify (i) the mixed membership mapping of users to a fixed number of communities, K, and (ii) the pairwise relations among the community.", "Then, MMSB draws links randomly between users according to the probability determined by the mixed membership and the community relations. Recently, MMSB has been extended from different aspects.", "For example, a hierarchical extension is proposed in #OTHEREFR to utilize user features; dynamical factors are introduced in #OTHEREFR ] to model temporal information; and nonparametric/infinite modeling is proposed in #OTHEREFR , in order to release the constraints on the number of communities. However, these approaches focus on single networks.", "When the data are sparse, they fail to model the mixed membership correctly due to the lack of knowledge." ]
[ "communities", "mixed membership vector" ]
background
{ "title": "Friendship Prediction in Composite Social Networks", "abstract": "Friendship prediction is an important task in social network analysis (SNA). It can help users identify friends and improve their level of activity. Most previous approaches predict users' friendship based on their historical records, such as their existing friendship, social interactions, etc. However, in reality, most users have limited friends in a single network, and the data can be very sparse. The sparsity problem causes existing methods to overfit the rare observations and suffer from serious performance degradation. This is particularly true when a new social network just starts to form. We observe that many of today's social networks are \"composite\" in nature, where people are often engaged in multiple networks. In addition, users' friendships are always correlated, for example, they are both friends on Facebook and Google+. Thus, by considering those overlapping users as the bridge, the friendship knowledge in other networks can help predict their friendships in the current network. This can be achieved by exploiting the knowledge in different networks in a collective manner. However, as each individual network has its own properties that can be incompatible and inconsistent with other networks, the naive merging of all networks into a single one may not work well. The proposed solution is to extract the common behaviors between different networks via a hierarchical Bayesian model. It captures the common knowledge across networks, while avoiding negative impacts due to network differences. Empirical studies demonstrate that the proposed approach improves the mean average precision of friendship prediction over state-of-theart baselines on nine real-world social networking datasets significantly." }
{ "title": "Mixed membership stochastic blockmodels", "abstract": "Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilisic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks." }
1412.4690
1409.7842
Introduction
It was also written to promote understanding of the model discovery mechanisms of MGGP and to allow researchers to add their own custom implementations of code to use MGGP in other non-regression contexts (e.g. #REFR ).
[ "This gives the user choice and the ability to gain understanding of the system being modelled by examination of the model library.", "Note that the 'human related' factors mentioned above -such as interpretation and deployment of models -are especially important when dealing with data obtained from highly multivariate non-linear systems of unknown structure #OTHEREFR for which traditional analysis tends to be difficult or intractable.", "Hence, symbolic regression (and symbolic data mining in general) has many features that make it an attractive basis for inducing simple, interpretable and deployable models from data where the 'true' underlying relationships are high dimensional and largely unknown.", "However -there has been a relative paucity of software that allows researchers to actually do symbolic data mining -and in many cases the existing software is either expensive, proprietary and closed source or requires a high degree of expertise in software configuration and machine learning to use it effectively.", "GPTIPS (an acronym for Genetic Programming Toolbox for the Identification of Physical Systems) was written to reduce the technical barriers to using symbolic data mining and to help researchers, who are not necessarily experts in computing science or machine learning, to build and deploy symbolic models in their fields of research." ]
[ "To this end, it was written as a free (subject to the GNU public software license, GPL v3), open source project in MATLAB.", "The use of MATLAB as the underlying platform confers the following benefits:", " Robust, trustable, fast and automatically multi-threaded implementations of many matrix and vector math algorithms (these are used extensively in GPTIPS).", " Widely taught at the undergraduate level and beyond at educational institutes around the world and hence is familiar (and site licensed) to a diverse array of students, researchers and other technical professionals.", "It is also heavily used in many commercial, technical and engineering environments." ]
[ "model discovery mechanisms" ]
method
{ "title": "GPTIPS 2: an open-source software platform for symbolic data mining", "abstract": "GPTIPS is a free, open source MATLAB based software platform for symbolic data mining (SDM). It uses a 'multigene' variant of the biologically inspired machine learning method of genetic programming (MGGP) as the engine that drives the automatic model discovery process. Symbolic data mining is the process of extracting hidden, meaningful relationships from data in the form of symbolic equations. In contrast to other data-mining methods, the structural transparency of the generated predictive equations can give new insights into the physical systems or processes that generated the data. Furthermore -this transparency makes the models very easy to deploy outside of MATLAB. The rationale behind GPTIPS is to reduce the technical barriers to using, understanding, visualising and deploying GP based symbolic models of data, whilst at the same time remaining highly customisable and delivering robust numerical performance for 'power users'. In this chapter, notable new features of the latest version of the software -GPTIPS 2 -are discussed with these aims in mind. Additionally, a simplified variant of the MGGP high level gene crossover mechanism is proposed. It is demonstrated that the new functionality of GPTIPS 2 (a) facilitates the discovery of compact symbolic relationships from data using multiple approaches, e.g. using novel 'gene-centric' visualisation analysis to mitigate horizontal bloat and reduce complexity in multigene symbolic regression models (b) provides numerous methods for visualising the properties of symbolic models (c) emphasises the generation of graphically navigable 'libraries' of models that are optimal in terms of the Pareto trade off surface of model performance and complexity and (d) expedites 'real world' applications by the simple, rapid and robust deployment of symbolic models outside the software environment they were developed in." }
{ "title": "When Darwin meets Lorenz: Evolving new chaotic attractors through genetic programming", "abstract": "In this paper, we propose a novel methodology for automatically finding new chaotic attractors through a computational intelligence technique known as multi-gene genetic programming (MGGP). We apply this technique to the case of the Lorenz attractor and evolve several new chaotic attractors based on the basic Lorenz template. The MGGP algorithm automatically finds new nonlinear expressions for the different state variables starting from the original Lorenz system. The Lyapunov exponents of each of the attractors are calculated numerically based on the time series of the state variables using time delay embedding techniques. The MGGP algorithm tries to search the functional space of the attractors by aiming to maximise the largest Lyapunov exponent (LLE) of the evolved attractors. To demonstrate the potential of the proposed methodology, we report over one hundred new chaotic attractor structures along with their parameters, which are evolved from just the Lorenz system alone." }
1909.01531
1612.07766
VII. RELATED WORK
Teechan #REFR is an off-chain micropayment channel that harnesses a TEE to increase the transaction throughput of Bitcoin. Bentov et al.
[ "However, similar to ConcurORAM, the implementation of TaoStore is limited to the non-recursive construction of Path ORAM which is not suitable when combining with TEE with limited trusted memory capacity.", "This work aims to design a simpler design that is suitable for any flavor of tree-based ORAM schemes.", "TEE for cryptocurrencies.", "The research community has investigated different ways of combining TEE with blockchain to both improve privacy and scalability of blockchains.", "Obscuro #OTHEREFR is a Bitcoin transaction mixer implemented in Intel SGX that addresses the linkability issue of Bitcoin transactions." ]
[ "proposed a new design that uses Intel SGX to build a real-time cryptocurrency exchange.", "Another example is the Towncrier system #OTHEREFR that uses TEE for securely transferring data to smart contract.", "Another prominent example is Ekiden #OTHEREFR which proposed off-chain smart contract execution using TEE.", "Finally, ZLite #OTHEREFR system is another example which used ORAM and TEE to provide SPV clients with oblivious access.", "However, similar to BITE, ZLite employed non-recursive PATH-ORAM as it is, and thus, the scalability and efficiency of the system is inherently limited due to the non-concurrent accesses." ]
[ "Bitcoin" ]
background
{ "title": "A Tale of Two Trees: One Writes, and Other Reads. {Optimized Oblivious Accesses to Large-Scale Blockchains}", "abstract": "The Bitcoin network has offered a new way of securely performing financial transactions over the insecure network. Nevertheless, this ability comes with the cost of storing a large (distributed) ledger, which has become unsuitable for personal devices of any kind. Although the simplified payment verification (SPV) clients can address this storage issue, a Bitcoin SPV client has to rely on other Bitcoin nodes to obtain its transaction history and the current approaches offer no privacy guarantees to the SPV clients. This work presents T 3 , a trusted hardware-secured Bitcoin full client that supports efficient and scalable oblivious search/update for Bitcoin SPV clients without sacrificing the privacy of the clients. In this design, we leverage the trusted execution and attestation capabilities of a trusted execution environment (TEE) and the ability to hide access patterns of oblivious random access memory (ORAM) to protect SPV clients' requests from a potentially malicious server. The key novelty of T 3 lies in the optimizations introduced to conventional oblivious random access memory (ORAM), tailored for expected SPV client usages. In particular, by making a natural assumption about the access patterns of SPV clients, we are able to propose a two-tree ORAM construction that overcomes the concurrency limitation associated with traditional ORAMs. We have implemented and tested our system using the current Bitcoin Unspent Transaction Output database. Our experiment shows that the system is highly efficient in practice while providing strong privacy and security guarantees to Bitcoin SPV clients." }
{ "title": "Teechan: Payment Channels Using Trusted Execution Environments", "abstract": "Abstract. Blockchain protocols are inherently limited in transaction throughput and latency. Recent efforts to address performance and scale blockchains have focused on off-chain payment channels. While such channels can achieve low latency and high throughput, deploying them securely on top of the Bitcoin blockchain has been difficult, partly because building a secure implementation requires changes to the underlying protocol and the ecosystem. We present Teechan, a full-duplex payment channel framework that exploits trusted execution environments. Teechan can be deployed securely on the existing Bitcoin blockchain without having to modify the protocol. It: (i) achieves a higher transaction throughput and lower transaction latency than prior solutions; (ii) enables unlimited full-duplex payments as long as the balance does not exceed the channel's credit; (iii) requires only a single message to be sent per payment in any direction; and (iv) places at most two transactions on the blockchain under any execution scenario. We have built and deployed the Teechan framework using Intel SGX on the Bitcoin network. Our experiments show that, not counting network latencies, Teechan can achieve 2,480 transactions per second on a single channel, with submillisecond latencies." }
2001.00221
1104.3097
Main results
In #REFR all optimal (n − 2)-colorings of the Kneser graph KG(n, 2) are characterized as follows.
[ ", n} \\ {i} and note that in each color class there exists (at least) one vertex which contains the symbol i and hence, is not adjacent to the vertex ij.", "This means that the vertex ij can not be adjacent to all of the vertices of a color class and thus f is not a T DC.", "Now, suppose that the symbol i appears in exactly k − 1 color classes and there exist a color class V j such that i does not appear in it.", "Let i ′ j ′ be a vertex in V j .", "Note that in this case the vertex ii ′ can not be adjacent to all of vertices of a color class which means that f is not a TDC and this completes the proof." ]
[ "Theorem 2.3 #OTHEREFR In every proper (n − 2)−coloring of the Kneser graph KG(n, 2), n ≥ 5, there exists a unique triangular color class.", "Furthermore, if c is a proper (n − 2)−coloring of KG(n, 2), then by renaming the symbols 1, 2, . . .", ", n, if it is necessary, we have the color classes F 1 , F 2 , . . . , F n−2 with the following properties.", "(a) F n−2 = {n(n − 1), n(n − 2), (n − 1)(n − 2)}, i.e. F n−2 is triangular;", "Now the following result is obtained." ]
[ "Kneser graph KG(n" ]
background
{ "title": "Total dominator chromatic number of Kneser graphs", "abstract": "Decomposition into special substructures inheriting significant properties is an important method for the investigation of some mathematical structures. A total dominator coloring (briefly, a TDC) of a graph G is a proper coloring (i.e. a partition of the vertex set V (G) into independent subsets named color classes) in which each vertex of the graph is adjacent to all of vertices of some color class. The total dominator chromatic number χ td (G) of G is the minimum number of color classes in a TDC of G. In this paper among some other results and by using the existance of Steiner triple systems, we determine the total dominator chromatic number of the Kneser graph KG(n, 2) for each n ≥ 5." }
{ "title": "On the locating chromatic number of Kneser graphs", "abstract": "Let c be a proper k-coloring of a connected graph G and Π = (C 1 , C 2 , . . . , C k ) be an ordered partition of V (G) into the resulting color classes. For a vertex v of G, the color code of v with respect to Π is defined to be the ordered k-tuple c In this paper, we study the locating chromatic number of Kneser graphs. First, among some other results, we show that χ L (KG(n, 2)) = n − 1 for all n ≥ 5. Then, we prove that χ L (KG(n, k)) ≤ n − 1, when n ≥ k 2 . Moreover, we present some bounds for the locating chromatic number of odd graphs." }
2001.00221
1104.3097
Main results
Theorem 2.3 #REFR In every proper (n − 2)−coloring of the Kneser graph KG(n, 2), n ≥ 5, there exists a unique triangular color class.
[ "This means that the vertex ij can not be adjacent to all of the vertices of a color class and thus f is not a T DC.", "Now, suppose that the symbol i appears in exactly k − 1 color classes and there exist a color class V j such that i does not appear in it.", "Let i ′ j ′ be a vertex in V j .", "Note that in this case the vertex ii ′ can not be adjacent to all of vertices of a color class which means that f is not a TDC and this completes the proof.", "In #OTHEREFR all of optimal (n − 2)−colorings of the Kneser graph KG(n, 2) are characterized as follows." ]
[ "Furthermore, if c is a proper (n − 2)−coloring of KG(n, 2), then by renaming the symbols 1, 2, . . .", ", n, if it is necessary, we have the color classes F 1 , F 2 , . . . , F n−2 with the following properties.", "(a) F n−2 = {n(n − 1), n(n − 2), (n − 1)(n − 2)}, i.e. F n−2 is triangular;", "Now the following result is obtained.", "Corollary 2.4 For each integer n ≥ 5, we have n − 1 ≤ χ td (KG(n, 2))." ]
[ "unique triangular color", "Kneser graph KG(n" ]
background
{ "title": "Total dominator chromatic number of Kneser graphs", "abstract": "Decomposition into special substructures inheriting significant properties is an important method for the investigation of some mathematical structures. A total dominator coloring (briefly, a TDC) of a graph G is a proper coloring (i.e. a partition of the vertex set V (G) into independent subsets named color classes) in which each vertex of the graph is adjacent to all of vertices of some color class. The total dominator chromatic number χ td (G) of G is the minimum number of color classes in a TDC of G. In this paper among some other results and by using the existance of Steiner triple systems, we determine the total dominator chromatic number of the Kneser graph KG(n, 2) for each n ≥ 5." }
{ "title": "On the locating chromatic number of Kneser graphs", "abstract": "Let c be a proper k-coloring of a connected graph G and Π = (C 1 , C 2 , . . . , C k ) be an ordered partition of V (G) into the resulting color classes. For a vertex v of G, the color code of v with respect to Π is defined to be the ordered k-tuple c In this paper, we study the locating chromatic number of Kneser graphs. First, among some other results, we show that χ L (KG(n, 2)) = n − 1 for all n ≥ 5. Then, we prove that χ L (KG(n, k)) ≤ n − 1, when n ≥ k 2 . Moreover, we present some bounds for the locating chromatic number of odd graphs." }
1710.06900
1611.01708
Discussion
This integration would make it easy to query mutual information between time series #REFR , identify data that is unlikely under the model, and make the method accessible to a broader audience.
[ "This paper has presented the temporally-reweighted CRP mixture, a domain-general nonparametric Bayesian method for multivariate time series.", "Experiments show strong quantitative and qualitative results on multiple real-world multivariate data analysis tasks, using little to no custom modeling.", "For certain application domains, however, predictive performance may improve by extending the model to include custom knowledge such as time-varying functionals.", "Further avenues for research include guidelines for selecting the window size; greater empirical validation; a stick breaking representation; improving inference scalability; and establishing theoretical conditions for posterior consistency.", "Also, it could be fruitful to integrate this method into a probabilistic programming platform #OTHEREFR , such as BayesDB." ]
[]
[ "time series", "mutual information" ]
method
{ "title": "Temporally-Reweighted Chinese Restaurant Process Mixtures for Clustering, Imputing, and Forecasting Multivariate Time Series", "abstract": "This article proposes a Bayesian nonparametric method for forecasting, imputation, and clustering in sparsely observed, multivariate time series data. The method is appropriate for jointly modeling hundreds of time series with widely varying, non-stationary dynamics. Given a collection of N time series, the Bayesian model first partitions them into independent clusters using a Chinese restaurant process prior. Within a cluster, all time series are modeled jointly using a novel \"temporally-reweighted\" extension of the Chinese restaurant process mixture. Markov chain Monte Carlo techniques are used to obtain samples from the posterior distribution, which are then used to form predictive inferences. We apply the technique to challenging forecasting and imputation tasks using seasonal flu data from the US Center for Disease Control and Prevention, demonstrating superior forecasting accuracy and competitive imputation accuracy as compared to multiple widely used baselines. We further show that the model discovers interpretable clusters in datasets with hundreds of time series, using macroeconomic data from the Gapminder Foundation." }
{ "title": "Detecting Dependencies in Sparse, Multivariate Databases Using Probabilistic Programming and Non-parametric Bayes", "abstract": "Datasets with hundreds of variables and many missing values are commonplace. In this setting, it is both statistically and computationally challenging to detect true predictive relationships between variables and also to suppress false positives. This paper proposes an approach that combines probabilistic programming, information theory, and non-parametric Bayes. It shows how to use Bayesian non-parametric modeling to (i) build an ensemble of joint probability models for all the variables; (ii) efficiently detect marginal independencies; and (iii) estimate the conditional mutual information between arbitrary subsets of variables, subject to a broad class of constraints. Users can access these capabilities using BayesDB, a probabilistic programming platform for probabilistic data analysis, by writing queries in a simple, SQL-like language. This paper demonstrates empirically that the method can (i) detect context-specific (in)dependencies on challenging synthetic problems and (ii) yield improved sensitivity and specificity over baselines from statistics and machine learning, on a real-world database of over 300 sparsely observed indicators of macroeconomic development and public health." }