Phys Rev Lett. Author manuscript; available in PMC Aug 10, 2009.
PMCID: PMC2724184
NIHMSID: NIHMS128814

Bayesian Approach to Network Modularity

Abstract

We present an efficient, principled, and interpretable technique for inferring module assignments and for identifying the optimal number of modules in a given network. We show how several existing methods for finding modules can be described as variant, special, or limiting cases of our work, and how the method overcomes the resolution limit problem, accurately recovering the true number of modules. Our approach is based on Bayesian methods for model selection which have been used with success for almost a century, implemented using a variational technique developed only in the past decade. We apply the technique to synthetic and real networks and outline how the method naturally allows selection among competing models.

Large-scale networks describing complex interactions among a multitude of objects have found application in a wide array of fields, from biology to social science to information technology [1,2]. In these applications one often wishes to model networks, suppressing the complexity of the full description while retaining relevant information about the structure of the interactions [3]. One such network model groups nodes into modules, or “communities,” with different densities of intra- and interconnectivity for nodes in the same or different modules. We present here a computationally efficient Bayesian framework for inferring the number of modules, model parameters, and module assignments for such a model.

The problem of finding modules in networks (or "community detection") has received much attention in the physics literature, wherein many approaches [4,5] focus on optimizing an energy-based cost function with fixed parameters over possible assignments of nodes into modules. The particular cost functions vary, but most compare a given node partitioning to an implicit null model, the two most popular being the configuration model and a limited version of the stochastic block model (SBM) [6,7]. While much effort has gone into how to optimize these cost functions, less attention has been paid to what is to be optimized. Recent studies emphasizing this latter question have shown that existing approaches suffer inherent problems regardless of how the optimization is performed: the choice of parameters sets a lower limit on the size of detectable modules, referred to as the "resolution limit" problem [8,9]. We extend recent probabilistic treatments of modular networks [10,11] to develop a solution to this problem that relies on inferring distributions over the model parameters, as opposed to asserting parameter values a priori, to determine the modular structure of a given network. The developed techniques are principled, interpretable, computationally efficient, and can be shown to generalize several previous studies on module detection.

We specify an N-node network by its adjacency matrix $A$, where $A_{ij} = 1$ if there is an edge between nodes $i$ and $j$ and $A_{ij} = 0$ otherwise, and define $\sigma_i \in \{1,\dots,K\}$ to be the unobserved module membership of the $i$th node. We use a constrained SBM, which consists of a multinomial distribution over module assignments with weights $\pi_\mu \equiv p(\sigma_i = \mu\,|\,\vec\pi)$ and Bernoulli distributions over edges contained within and between modules with weights $\theta_c \equiv p(A_{ij} = 1\,|\,\sigma_i = \sigma_j, \vec\theta)$ and $\theta_d \equiv p(A_{ij} = 1\,|\,\sigma_i \neq \sigma_j, \vec\theta)$, respectively. In short, to generate a random undirected graph under this model we roll a K-sided die (biased by $\vec\pi$) N times to determine module assignments for each of the N nodes; we then flip one of two biased coins (for either intra- or intermodule connection, biased by $\theta_c$, $\theta_d$, respectively) for each of the $N(N-1)/2$ pairs of nodes to determine if the pair is connected. The extension to directed graphs is straightforward.
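
To make the generative process concrete, the following is a minimal sketch of sampling from this constrained SBM (ours, not the authors' released code); it assumes numpy, and the function name and default values are illustrative only.

```python
import numpy as np

def sample_sbm(N, pi, theta_c, theta_d, seed=0):
    """Sample an undirected graph from the constrained SBM described above.

    pi      : length-K vector of module weights (sums to 1)
    theta_c : probability of an edge within a module
    theta_d : probability of an edge between modules
    """
    rng = np.random.default_rng(seed)
    K = len(pi)
    # Roll the K-sided die once per node to get module assignments.
    sigma = rng.choice(K, size=N, p=pi)
    A = np.zeros((N, N), dtype=int)
    for i in range(N):
        for j in range(i + 1, N):
            # Flip the intra- or inter-module coin for each pair of nodes.
            p = theta_c if sigma[i] == sigma[j] else theta_d
            A[i, j] = A[j, i] = rng.random() < p
    return A, sigma

# Example: 4 equally weighted modules with assortative structure.
A, sigma = sample_sbm(N=120, pi=np.full(4, 0.25), theta_c=0.3, theta_d=0.02)
```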

Using this model, we write the joint probability $p(A, \vec\sigma\,|\,\vec\pi, \vec\theta, K) = p(A\,|\,\vec\sigma, \vec\theta)\, p(\vec\sigma\,|\,\vec\pi)$ (conditional dependence on K has been suppressed below for brevity) as

$$ p(A, \vec\sigma\,|\,\vec\pi, \vec\theta) = \theta_c^{\,c_+} (1-\theta_c)^{\,c_-}\, \theta_d^{\,d_+} (1-\theta_d)^{\,d_-} \prod_{\mu=1}^{K} \pi_\mu^{\,n_\mu}, $$
(1)

where $c_+ \equiv \sum_{i>j} A_{ij}\, \delta_{\sigma_i,\sigma_j}$ is the number of edges contained within communities, $c_- \equiv \sum_{i>j} (1 - A_{ij})\, \delta_{\sigma_i,\sigma_j}$ is the number of nonedges contained within communities, $d_+ \equiv \sum_{i>j} A_{ij}\, (1 - \delta_{\sigma_i,\sigma_j})$ is the number of edges between different communities, $d_- \equiv \sum_{i>j} (1 - A_{ij})\, (1 - \delta_{\sigma_i,\sigma_j})$ is the number of nonedges between different communities, and $n_\mu \equiv \sum_{i=1}^{N} \delta_{\sigma_i,\mu}$ is the occupation number of the $\mu$th module. Defining $H \equiv -\ln p(A, \vec\sigma\,|\,\vec\pi, \vec\theta)$ and regrouping terms by local and global counts, we recover (up to additive constants) a generalized version of [10]:

$$ H = -\sum_{i>j} \left( J_L A_{ij} - J_G \right) \delta_{\sigma_i,\sigma_j} + \sum_{\mu=1}^{K} h_\mu \sum_{i=1}^{N} \delta_{\sigma_i,\mu}, $$
(2)

a Potts model Hamiltonian with unknown coupling constants $J_G \equiv \ln[(1-\theta_d)/(1-\theta_c)]$, $J_L \equiv \ln(\theta_c/\theta_d) + J_G$, and chemical potentials $h_\mu \equiv -\ln \pi_\mu$. (Note that many previous methods omit a chemical potential term, implicitly assuming equally sized groups.)
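
As an illustration of this regrouping, the sketch below (ours, assuming numpy) tallies the counts $c_\pm$, $d_\pm$ of Eq. (1) and evaluates the Potts energy of Eq. (2); up to a $\vec\sigma$-independent additive constant it agrees with $-\ln p(A, \vec\sigma\,|\,\vec\pi, \vec\theta)$.

```python
import numpy as np

def edge_counts(A, sigma):
    """Counts c+, c-, d+, d- of Eq. (1), taken over node pairs with i > j."""
    same = (sigma[:, None] == sigma[None, :])
    iu = np.triu_indices(len(sigma), k=1)          # one index pair per unordered pair
    a, s = A[iu].astype(bool), same[iu]
    return (a & s).sum(), (~a & s).sum(), (a & ~s).sum(), (~a & ~s).sum()

def hamiltonian(A, sigma, theta_c, theta_d, pi):
    """Potts-model energy of Eq. (2), dropping the additive constant."""
    J_G = np.log((1 - theta_d) / (1 - theta_c))
    J_L = np.log(theta_c / theta_d) + J_G
    h = -np.log(pi)
    same = (sigma[:, None] == sigma[None, :])
    iu = np.triu_indices(len(sigma), k=1)
    pair_term = -np.sum((J_L * A[iu] - J_G) * same[iu])
    n = np.bincount(sigma, minlength=len(pi))      # occupation numbers n_mu
    return pair_term + np.dot(h, n)

# Check: -ln p(A, sigma | pi, theta) from Eq. (1) differs from H by a constant in sigma:
# cp, cm, dp, dm = edge_counts(A, sigma)
# minus_log_p = -(cp * np.log(theta_c) + cm * np.log(1 - theta_c)
#                 + dp * np.log(theta_d) + dm * np.log(1 - theta_d)
#                 + np.dot(np.bincount(sigma, minlength=len(pi)), np.log(pi)))
```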

While previous approaches [4,10] minimize related Hamiltonians as a function of $\vec\sigma$, these methods require that the user specify values for these unknown constants, which gives rise to the resolution limit problem [8,9]. Our approach, however, uses a disorder-averaged calculation to infer distributions over these parameters, avoiding this issue. To do so, we take beta ($\mathcal{B}$) and Dirichlet ($\mathcal{D}$) distributions over $\vec\theta$ and $\vec\pi$, respectively:

$$ p(\vec\theta)\, p(\vec\pi) = \mathcal{B}(\theta_c; \tilde c_+^0, \tilde c_-^0)\, \mathcal{B}(\theta_d; \tilde d_+^0, \tilde d_-^0)\, \mathcal{D}(\vec\pi; \tilde n^0). $$
(3)

These conjugate prior distributions are defined on the full range of $\vec\theta$ and $\vec\pi$, respectively, and their functional forms are preserved when integrated against the model to obtain updated parameter distributions. Their hyperparameters $\{\tilde c_+^0, \tilde c_-^0, \tilde d_+^0, \tilde d_-^0, \tilde n^0\}$ act as pseudocounts that augment observed edge counts and occupation numbers.
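
For intuition about the pseudocounts, here is a small illustrative sketch (ours) of the conjugate update for the intra-module edge probability, assuming scipy; the counts are hypothetical.

```python
from scipy.stats import beta

# Hypothetical pseudocounts (a flat prior) and hypothetical observed intra-module counts.
c_plus0, c_minus0 = 1.0, 1.0      # pseudo-edges and pseudo-nonedges within modules
c_plus, c_minus = 240, 1200       # observed intra-module edges and nonedges

# Conjugacy: a beta prior on theta_c combined with Bernoulli edge observations yields a
# beta posterior whose parameters are the observed counts augmented by the pseudocounts.
theta_c_posterior = beta(c_plus + c_plus0, c_minus + c_minus0)
print(theta_c_posterior.mean())   # posterior mean of the intra-module edge probability
```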

In this framework the problem of module detection can be stated as follows: given an adjacency matrix $A$, determine the most probable number of modules (i.e., occupied spin states) $K^* = \mathrm{argmax}_K\, p(K|A)$ and infer posterior distributions over the model parameters (i.e., coupling constants and chemical potentials) $p(\vec\pi, \vec\theta\,|\,A)$ and the latent module assignments (i.e., spin states) $p(\vec\sigma\,|\,A)$. In the absence of a priori belief about the number of modules, we demand that $p(K)$ is sufficiently weak that maximizing $p(K|A) \propto p(A|K)\, p(K)$ is equivalent to maximizing $p(A|K)$, referred to as the evidence. This approach to model selection [12], proposed by Jeffreys in 1935 [13], balances model fidelity and complexity to determine, in this context, the number of modules.

A more physically intuitive interpretation of the evidence is as the disorder-averaged partition function of a spin glass, calculated by marginalizing over the possible quenched values of the parameters $\vec\theta$ and $\vec\pi$ as well as the spin configurations $\vec\sigma$:

$$ Z = p(A|K) = \sum_{\vec\sigma} \int d\vec\theta\, d\vec\pi\; p(A, \vec\sigma\,|\,\vec\pi, \vec\theta)\, p(\vec\theta)\, p(\vec\pi) $$
(4)

$$ = \sum_{\vec\sigma} \int d\vec\theta\, d\vec\pi\; e^{-H}\, p(\vec\theta)\, p(\vec\pi). $$
(5)

While the $\vec\theta$ and $\vec\pi$ integrals in Eq. (4) can be performed analytically, the remaining sum over module assignments $\vec\sigma$ scales as $K^N$ and becomes computationally intractable for networks of even modest sizes. To accommodate large-scale networks we use a variational approach that is well known to the statistical physics community [14] and has recently found application in the statistics and machine learning literature, commonly termed variational Bayes (VB) [15]. We proceed by taking the negative logarithm of $Z$ and using Gibbs's inequality:

$$ -\ln Z = -\ln \left[ \sum_{\vec\sigma} \int d\vec\theta\, d\vec\pi\; q(\vec\sigma, \vec\pi, \vec\theta)\, \frac{p(A, \vec\sigma, \vec\pi, \vec\theta\,|\,K)}{q(\vec\sigma, \vec\pi, \vec\theta)} \right] \le -\sum_{\vec\sigma} \int d\vec\theta\, d\vec\pi\; q(\vec\sigma, \vec\pi, \vec\theta)\, \ln \frac{p(A, \vec\sigma, \vec\pi, \vec\theta\,|\,K)}{q(\vec\sigma, \vec\pi, \vec\theta)}. $$
(6)

That is, we first multiply and divide by an arbitrary approximating distribution $q(\vec\sigma, \vec\pi, \vec\theta)$ and then bound the negative log of the expectation from above by the negative expectation of the log (Jensen's inequality). We define the quantity to be minimized, the right-hand side of Eq. (6), as the variational free energy $F\{q; A\}$, a functional of $q(\vec\sigma, \vec\pi, \vec\theta)$. (Note that the negative log of $q(\vec\sigma, \vec\pi, \vec\theta)$ plays the role of a test Hamiltonian in variational approaches in statistical mechanics.)
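
Written out, the bound in Eq. (6) is the familiar trade-off between the average energy and the entropy of the trial distribution:

$$ F\{q; A\} \;\equiv\; \big\langle -\ln p(A, \vec\sigma, \vec\pi, \vec\theta\,|\,K) \big\rangle_{q} \;-\; S[q] \;\ge\; -\ln Z, \qquad S[q] \equiv -\big\langle \ln q(\vec\sigma, \vec\pi, \vec\theta) \big\rangle_{q}, $$

with equality if and only if $q$ equals the true posterior $p(\vec\sigma, \vec\pi, \vec\theta\,|\,A, K)$.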

We next choose a factorized approximating distribution $q(\vec\sigma, \vec\pi, \vec\theta) = q_{\vec\sigma}(\vec\sigma)\, q_{\vec\pi}(\vec\pi)\, q_{\vec\theta}(\vec\theta)$ with $q_{\vec\pi}(\vec\pi) = \mathcal{D}(\vec\pi; \tilde n)$ and $q_{\vec\theta}(\vec\theta) = q_c(\theta_c)\, q_d(\theta_d) = \mathcal{B}(\theta_c; \tilde c_+, \tilde c_-)\, \mathcal{B}(\theta_d; \tilde d_+, \tilde d_-)$. As in mean field theory, we factorize $q_{\vec\sigma}(\vec\sigma)$ over the nodes as $q(\sigma_i = \mu) = Q_{i\mu}$, an N-by-K matrix which gives the probability that the $i$th node belongs to the $\mu$th module. Evaluating $F\{q; A\}$ with this functional form for $q(\vec\sigma, \vec\pi, \vec\theta)$ gives a function of the variational parameters $\{\tilde c_+, \tilde c_-, \tilde d_+, \tilde d_-, \tilde n\}$ and matrix elements $Q_{i\mu}$ which can subsequently be minimized by taking the appropriate derivatives.

We summarize the resulting iterative algorithm, which provably converges to a local minimum of $F\{q; A\}$ and provides controlled approximations to the evidence $p(A|K)$ as well as the posteriors $p(\vec\pi, \vec\theta\,|\,A)$ and $p(\vec\sigma\,|\,A)$:

Initialization

Initialize the N-by-K matrix $Q = Q^0$ and set the variational parameters to the pseudocounts: $\tilde c_+ = \tilde c_+^0$, $\tilde c_- = \tilde c_-^0$, $\tilde d_+ = \tilde d_+^0$, $\tilde d_- = \tilde d_-^0$, $\tilde n_\mu = \tilde n_\mu^0$.

Main loop

Until convergence in F{q; A}:

  1. Update the expected value of the coupling constants and chemical potentials,
    $$ J_L = \psi(\tilde c_+) - \psi(\tilde c_-) - \psi(\tilde d_+) + \psi(\tilde d_-) $$
    (7)
    $$ J_G = \psi(\tilde d_-) - \psi(\tilde d_+ + \tilde d_-) - \psi(\tilde c_-) + \psi(\tilde c_+ + \tilde c_-) $$
    (8)
    $$ h_\mu = \psi\!\left( \sum_{\nu=1}^{K} \tilde n_\nu \right) - \psi(\tilde n_\mu), $$
    (9)
    where $\psi(x)$ is the digamma function;
  2. Update the variational distribution over each spin $\sigma_i$,
    $$ Q_{i\mu} \propto \exp\left\{ \sum_{j \neq i} \left[ J_L A_{ij} - J_G \right] Q_{j\mu} - h_\mu \right\} $$
    (10)
    normalized such that $\sum_\mu Q_{i\mu} = 1$, for all $i$;
  3. Update the variational distribution over parameters from the expected counts and pseudocounts,
    $$ \tilde n_\mu = n_\mu + \tilde n_\mu^0 = \sum_{i=1}^{N} Q_{i\mu} + \tilde n_\mu^0 $$
    (11)
    $$ \tilde c_+ = c_+ + \tilde c_+^0 = \tfrac{1}{2} \mathrm{Tr}\!\left( Q^T A Q \right) + \tilde c_+^0 $$
    (12)
    $$ \tilde c_- = c_- + \tilde c_-^0 = \tfrac{1}{2} \mathrm{Tr}\!\left( Q^T (u n^T - Q) \right) - c_+ + \tilde c_-^0 $$
    (13)
    $$ \tilde d_+ = d_+ + \tilde d_+^0 = M - c_+ + \tilde d_+^0 $$
    (14)
    $$ \tilde d_- = d_- + \tilde d_-^0 = C - M - c_- + \tilde d_-^0, $$
    (15)
    where $C = N(N-1)/2$, $M \equiv \sum_{i>j} A_{ij}$, $n$ is the K-by-1 vector with entries $n_\mu$, and $u$ is an N-by-1 vector of 1's;
  4. Calculate the updated optimized free energy
    $$ F\{q; A\} = -\ln \frac{\tilde Z_c \tilde Z_d \tilde Z_\pi}{Z_c^0 Z_d^0 Z_\pi^0} + \sum_{\mu=1}^{K} \sum_{i=1}^{N} Q_{i\mu} \ln Q_{i\mu}, $$
    (16)

where $\tilde Z_\pi = B(\tilde n)$ is the beta function with a vector-valued argument, the partition function (normalization constant) of the Dirichlet distribution $q_{\vec\pi}(\vec\pi)$ [and likewise $\tilde Z_c$, $\tilde Z_d$ for $q_c(\theta_c)$, $q_d(\theta_d)$]; the superscript-0 partition functions are evaluated at the pseudocounts. Because the algorithm is guaranteed to converge only to a local optimum, VB is best implemented with multiple randomly chosen initializations of $Q^0$, keeping the run that attains the lowest value of $F\{q; A\}$.
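
For concreteness, the following is a schematic Python/numpy implementation of the loop above (ours, not the released vbmod package [26]); a dense adjacency matrix and symmetric pseudocounts are assumed for brevity.

```python
import numpy as np
from scipy.special import digamma, gammaln

def vb_modules(A, K, n0=0.5, c0=(0.5, 0.5), d0=(0.5, 0.5),
               max_iter=500, tol=1e-10, seed=0):
    """Schematic VB loop for the constrained SBM, following Eqs. (7)-(16).

    A : symmetric 0/1 adjacency matrix with zero diagonal (dense, for simplicity)
    K : maximum number of allowed modules
    n0, c0, d0 : Dirichlet and beta pseudocounts
    Returns the N-by-K responsibility matrix Q and the variational free energy F.
    """
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    C, M = N * (N - 1) / 2.0, A.sum() / 2.0

    # Initialization: random responsibilities; variational parameters at the pseudocounts.
    Q = rng.random((N, K))
    Q /= Q.sum(axis=1, keepdims=True)
    nt = np.full(K, float(n0))
    cpt, cmt = c0
    dpt, dmt = d0

    def log_beta(a):
        """Log of the beta function with a vector-valued argument."""
        a = np.asarray(a, dtype=float)
        return gammaln(a).sum() - gammaln(a.sum())

    lnZ0 = log_beta(c0) + log_beta(d0) + log_beta(np.full(K, float(n0)))
    F_old = np.inf
    for _ in range(max_iter):
        # Step 1: expected couplings and chemical potentials, Eqs. (7)-(9).
        J_L = digamma(cpt) - digamma(cmt) - digamma(dpt) + digamma(dmt)
        J_G = digamma(dmt) - digamma(dpt + dmt) - digamma(cmt) + digamma(cpt + cmt)
        h = digamma(nt.sum()) - digamma(nt)

        # Step 2: mean-field update of Q, Eq. (10); remove the j = i term, normalize rows.
        field = (J_L * A - J_G) @ Q + J_G * Q
        logQ = field - h
        logQ -= logQ.max(axis=1, keepdims=True)
        Q = np.exp(logQ)
        Q /= Q.sum(axis=1, keepdims=True)

        # Step 3: expected counts plus pseudocounts, Eqs. (11)-(15).
        n = Q.sum(axis=0)
        c_plus = 0.5 * np.trace(Q.T @ A @ Q)
        c_minus = 0.5 * np.trace(Q.T @ (np.outer(np.ones(N), n) - Q)) - c_plus
        nt = n + n0
        cpt, cmt = c_plus + c0[0], c_minus + c0[1]
        dpt, dmt = (M - c_plus) + d0[0], (C - M - c_minus) + d0[1]

        # Step 4: variational free energy, Eq. (16).
        lnZ = log_beta([cpt, cmt]) + log_beta([dpt, dmt]) + log_beta(nt)
        F = -(lnZ - lnZ0) + np.sum(Q * np.log(Q + 1e-300))
        if abs(F_old - F) < tol:
            break
        F_old = F
    return Q, F
```

In practice one would run such a routine from several random $Q^0$, keep the solution with the lowest $F\{q; A\}$, and read off $K^*$ as the number of columns of $Q$ retaining appreciable occupation; a sparse implementation of the $AQ$ products recovers the $\mathcal{O}(MK)$ scaling quoted below.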

Convergence of the above algorithm provides the approximate posterior distributions $q_{\vec\sigma}(\vec\sigma)$, $q_{\vec\pi}(\vec\pi)$, and $q_{\vec\theta}(\vec\theta)$ and simultaneously returns $K^*$, the number of nonempty modules that maximizes the evidence. As such, one need only specify a maximum number of allowed modules and run VB; the occupation probabilities of extraneous modules converge to zero as the algorithm runs, leaving the most probable number of occupied modules.

This is significantly more accurate than other approximate methods, such as the Bayesian information criterion (BIC) [16] and integrated classification likelihood (ICL) [17,18], and is less computationally expensive than empirical methods such as cross-validation (CV) [19,20], in which one must perform the associated procedure after fitting the model for each considered value of K. Specifically, BIC and ICL are suggested for single-peaked likelihood functions well approximated by Laplace integration and studied in the large-N limit. For an SBM the first assumption of a single-peaked function is invalidated by the underlying symmetries of the latent variables; i.e., nodes are distinguishable and modules indistinguishable. See Fig. 1 for a comparison of our method with the Girvan-Newman modularity [5] in the resolution limit test [8,9], where VB consistently identifies the correct number of modules. [Note that VB is both accurate and fast: it performs competitively in the "four groups" test [21] and scales as $\mathcal{O}(MK)$. Runtime for the main loop in MATLAB on a 2 GHz laptop is ~6 min for $N = 10^6$ nodes with average degree 16 and $K = 4$.]

FIG. 1
(color). Results for the resolution limit test suggested in [8,9]. Shapes and colors correspond to the inferred modules. (Left) Our method, variational Bayes, in which all 15 modules are correctly identified (each clique is assigned a unique color/shape). ...

Furthermore, we note that previous methods in which parameter inference is performed by optimizing a likelihood function via expectation maximization (EM) [11,18] are also special cases of the framework presented here. EM is a limiting case of VB in which one collapses the distributions over parameters to point estimates at the mode of each distribution; however, EM is prone to overfitting and cannot be used to determine the appropriate number of modules, as the likelihood of the observed data increases with the number of modules in the model. As such, VB performs at least as well as EM while simultaneously providing complexity control [22,23].

In addition to validating the method on synthetic networks, we apply VB to the 2000 NCAA American football schedule shown in Fig. 2 [24]. Each of the 115 nodes represents an individual team and each of the 613 edges represents a game played between the two teams it joins. The algorithm correctly identifies the presence of the 12 conferences that make up the schedule, where teams tend to play more games within than between conferences, making most modules assortative. Of the 115 teams, 105 are assigned to their corresponding conferences, with the majority of exceptions belonging to the frequently misclassified independent teams [25], the only disassortative group in the network. We emphasize that, unlike other methods in which the number of conferences must be asserted, VB automatically determines 12 as the most probable number of conferences.

FIG. 2
(color). Each of the 115 nodes represents a NCAA team and each of the 613 edges a game played in 2000 between two teams it joins. The inferred module assignments (designated by color) on the football network which recover the 12 NCAA conferences (designated ...

Posing module detection as inference of a latent variable within a probabilistic model has a number of advantages. It clarifies what precisely is to be optimized and suggests a principled and efficient procedure for how to perform this optimization. Inferring distributions over model parameters reveals the natural scale of a given modular network, avoiding resolution limit problems. This method allows us to view a number of approaches to the problem by physicists, applied mathematicians, social scientists, and computer scientists as related subparts of a larger problem. In short, it suggests how a number of seemingly disparate methods may be recast and united. A second advantage of this work is its generalization to other models, including those designed to reveal structural features other than modularity. Finally, use of the evidence allows model selection not only among nested models, e.g., models differing only in the number of parameters, but even among models of different parametric families. The last strikes us as a natural area for progress in the statistical study of real-world networks.

Acknowledgments

It is a pleasure to acknowledge useful conversations on modeling with Joel Bader and Matthew Hastings, on Monte Carlo methods for Potts models with Jonathan Goodman, with David Blei on variational methods, and with Aaron Clauset for his feedback on this manuscript [26]. J. H. was supported by NIH No. 5PN2EY016586; C. W. was supported by NSF No. ECS-0425850 and NIH No. 1U54CA121852.

Footnotes

PACS numbers: 89.75.Hc, 02.50.–r, 02.50.Tt

Contributor Information

Jake M. Hofman, Department of Physics, Columbia University, New York, New York 10027, USA, jmh2045@columbia.edu.

Chris H. Wiggins, Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York 10027, USA, chris.wiggins@columbia.edu.

References

1. Albert R, Barabási AL. Rev Mod Phys. 2002;74:47.
2. Watts DJ, Strogatz SH. Nature (London). 1998;393:440.
3. Ziv E, Middendorf M, Wiggins CH. Phys Rev E. 2005;71:046117.
4. Reichardt J, Bornholdt S. Phys Rev E. 2006;74:016110.
5. Newman M, Girvan M. Phys Rev E. 2004;69:026113.
6. Holland P, Leinhardt S. Sociological Methodology. 1976;7:1.
7. McSherry F. Proceedings of the 42nd IEEE Symposium on the Foundations of Computer Science, 2001. IEEE; New York: 2001. pp. 529–537.
8. Kumpula J, Saramäki J, Kaski K, Kertész J. Eur Phys J B. 2007;56:41.
9. Fortunato S, Barthélemy M. Proc Natl Acad Sci USA. 2007;104:36.
10. Hastings MB. Phys Rev E. 2006;74:035102(R).
11. Newman MEJ, Leicht EA. Proc Natl Acad Sci USA. 2007;104:9564.
12. Kass RE, Raftery AE. J Am Stat Assoc. 1995;90:773.
13. Jeffreys H. Proc Cambridge Philos Soc. 1935;31:203.
14. Feynman RP. Statistical Mechanics, A Set of Lectures. W. A. Benjamin; Reading, MA: 1972.
15. Jordan MI, Ghahramani Z, Jaakkola T, Saul LK. Mach Learn. 1999;37:183.
16. Schwarz G. Ann Stat. 1978;6:461.
17. Biernacki C, Celeux G, Govaert G. IEEE Trans Pattern Anal Mach Intell. 2000;22:719.
18. Ambroise C, Zanghi H, Miele V. Statistics for Systems Biology Group Research Report No. SSB-RR-8. 2007.
19. Airoldi EM, Blei DM, Fienberg SE, Xing EP. arXiv:0705.4485v1.
20. Stone M. J Royal Stat Soc. 1974;36:111.
21. Danon L, Díaz-Guilera A, Duch J, Arenas A. J Stat Mech. 2005:P09008.
22. Bishop CM. Pattern Recognition and Machine Learning. Springer; New York: 2006.
23. MacKay DJ. Information Theory, Inference, and Learning Algorithms. Cambridge University Press; Cambridge, U.K.: 2003.
24. Girvan M, Newman MEJ. Proc Natl Acad Sci USA. 2002;99:7821.
25. Clauset A, Moore C, Newman MEJ. In: ICML 2006 Ws, Lecture Notes in Computer Science. Airoldi EM, editor. Springer-Verlag; Berlin: 2007.
26. Related software can be downloaded from http://vbmod.sourceforge.net.