Entropy as a Robustness Marker in Genetic Regulatory Networks

Genetic regulatory networks have evolved by complexifying their control systems with numerous effectors (inhibitors and activators). This is the case, for example, for the double inhibition by microRNAs and circular RNAs, which introduces a ubiquitous double-brake control that in general reduces the number of attractors of complex genetic networks (e.g., by destroying positive regulation circuits). The relevant complexity indices are the number of nodes, their connectivity, the number of strongly connected components and the size of the interaction graph. The stability and robustness of a network correspond to its ability to recover the same asymptotic trajectories, and hence the same number and nature of attractors, after dynamical and structural disturbances, respectively. The complexity of the dynamics is quantified here using the notion of attractor entropy, which describes how the invariant measure of the dynamics is spread over the state space. The stability (robustness) is characterized by the rate at which the system returns to its equilibrium trajectories (invariant measure) after a dynamical (structural) perturbation. The mathematical relationships between the indices of complexity, stability and robustness are presented in the case of Markov chains related to threshold Boolean random regulatory networks updated with a Hopfield-like rule. The entropy of the invariant measure of a network, as well as the Kolmogorov-Sinaï entropy of the Markov transition matrix ruling its random dynamics, can be considered complexity, stability and robustness indices, and the links between these notions can be exploited to characterize the resilience of a biological system with respect to endogenous or exogenous perturbations.
The example of the genetic network controlling the kinin-kallikrein system involved in a pathology called angioedema shows the practical interest of the present approach to complexity and robustness in two cases: its normal physiological and its abnormal pathological dynamical behaviors.


Hopfield Dynamics
Let us now consider discrete Hopfield dynamics [35], defined in a genetic regulatory network N having n genes, where x_i(t) = 1 if the gene i is expressing its protein, and x_i(t) = 0 otherwise. The probability that x_i(t+1) = 1, knowing the states at time t in a neighborhood V_i of i, is given by:

P(x_i(t+1) = 1 | x_j(t), j ∈ V_i) = exp((Σ_{j∈V_i} w_ij x_j(t) − θ_i)/T) / [1 + exp((Σ_{j∈V_i} w_ij x_j(t) − θ_i)/T)], (1)

where w_ij is a parameter representing the influence or weight (inhibitory, if w_ij < 0; activating, if w_ij > 0; and null, if w_ij = 0) that the gene j from the neighborhood V_i = {j; w_ij ≠ 0} exerts on the gene i, and θ_i is the activation threshold, which corresponds to the notion of external field in statistical mechanics and here to the minimal value that the interaction potential Σ_{j∈V_i} w_ij x_j must reach for activation. The thermodynamic parameter, temperature T, introduces a certain degree of randomness into the transition rule. In particular, if T = 0, the rule becomes deterministic:

x_i(t+1) = H(Σ_{j∈V_i} w_ij x_j(t) − θ_i),

where H is the Heaviside function.
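One step of these dynamics can be sketched in code as follows; this is a minimal illustration of rule (1) in its logistic form, not the authors' implementation, and the matrix `W`, threshold `theta` and temperature `T` below are toy values:

```python
import math
import random

def hopfield_update(x, W, theta=0.0, T=1.0, rng=random):
    """One parallel update of a Boolean Hopfield-like network.

    x: Boolean state vector (x[i] = 1 if gene i expresses its protein),
    W[i][j]: weight of the influence of gene j on gene i,
    theta: activation threshold, T: temperature (T = 0 gives the
    deterministic Heaviside rule).
    """
    n = len(x)
    new_x = []
    for i in range(n):
        potential = sum(W[i][j] * x[j] for j in range(n)) - theta
        if T == 0:
            new_x.append(1 if potential >= 0 else 0)    # Heaviside function H
        else:
            p = 1.0 / (1.0 + math.exp(-potential / T))  # logistic probability
            new_x.append(1 if rng.random() < p else 0)
    return new_x

# Deterministic limit T = 0: a mutual-activation circuit keeps both genes on.
W = [[0, 1], [1, 0]]
print(hopfield_update([1, 1], W, theta=0.5, T=0))  # [1, 1]
```

For T > 0 the same call returns a random state, with the expression probability of each gene increasing with its interaction potential.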
Once this rule is defined, three incidence matrices of important graphs characterizing the network dynamics can be defined:
-Interaction matrix W: its coefficient w_ij corresponds to the action of the gene j on the gene i. W is analogous to a discrete Jacobian matrix. The signed Jacobian matrix A is defined by its coefficients α_ij = sign(w_ij); its associated directed graph is the interaction graph G;
-Updating matrix S: s_ij = 1, if j is updated before or with i; else s_ij = 0;
-Flow matrix Φ: φ_bc = 1, where b and c belong to E, if and only if b = ϕ(c,1); else φ_bc = 0.
The matrices involved in the network dynamics are constant or depend on time t [41,42]:
-Concerning W, the time dependence is called Hebbian dynamics: if the two vectors {x_i(s)}_{s<t} and {x_j(s)}_{s<t} have a non-zero correlation coefficient ρ_ij(t), then the dependence is expressed through the equation w_ij(t+1) = w_ij(t) + hρ_ij(t), with h > 0, corresponding to a reinforcement of the absolute value of the interactions w_ij(t) having succeeded in increasing the x_i(s)'s, if w_ij(t) is positive, and conversely in decreasing the x_i(s)'s, if w_ij(t) is negative.
-Concerning S, the updating can be state dependent or not. If all s_ij equal one, the updating schedule is called parallel; if there exists a sequence of indices i_1, ..., i_n of the n nodes of the network such that s_{i_k i_{k+1}} = 1, the other s_ij's being equal to 0, the updating is called sequential. Between these two extreme schedules, the updating modes are called block-sequential; i.e., they are parallel within a block, the blocks being updated sequentially. A more realistic schedule in genetic and metabolic networks is called block-parallel [50]: these networks are composed of blocks made of genes sequentially updated, the blocks themselves being updated in parallel (Figure 2 Top middle). Some interactions between these blocks can exist (i.e., there are w_ij ≠ 0, with i and j belonging to two different blocks), but because the block sizes differ, the time at which the first attractor state is reached in a block is not necessarily synchronized with the corresponding times in other blocks: the synchronization expected as asymptotic behavior of the dynamics depends on the intra-block as well as on the inter-block interactions, which explains why the states of genes in a block serving as the clock for the network are highly dependent on the states of genes in other blocks connected to them (e.g., acting as transcription factors of the clock genes).
-Concerning Φ, the transition operator is Markovian, and hence autonomous in time.
Figure 2 Top shows the three important graphs of the network dynamics; i.e., the graphs having as incidence matrices respectively, the interaction W (left), updating S (middle) and trajectory (right) matrices. If the dynamics are ruled by a deterministic Hopfield network, with temperature equal to 0 and interaction weights w ij , which are equal to 1, −1 or 0, their attractor is a limit-cycle given on Figure 2 (top right), showing two intricate rhythms, one of Period 4 corresponding to the inner clock dynamics, embedded in a rhythm of period 12, the rhythm of the whole network. Figure 2 (Bottom) shows other updating graphs, less realistic than the block-parallel for representing the clock action.


Figure 2.
Top left: interaction graph G of a network made of a 3-switch linked to a regulon representing a genetic clock. Top middle: the updating graph corresponding to block-parallel dynamics ruling the network. Top right: a part of the trajectory graph of the dynamics exhibiting a limit-cycle of period 12 having internally a cycle of period 4 for the clock. Bottom: updating graphs corresponding successively (from the left to the right) to the parallel, sequential and block-sequential dynamics.


Attractor Entropy and Isochronal Entropy, Indices of Complexity and Synchronizability
The attractor entropy E_attractor is a measure of the heterogeneity of the attractor landscape on the state space E of a dynamical system. It is defined by the quantity:

E_attractor = −Σ_{k=1,m} ABRS(A_k) log_2[ABRS(A_k)],

where ABRS(A_k) is the attraction basin relative size of the kth attractor A_k among the m attractors of the network dynamics, i.e., the size of its attraction basin divided by 2^n, n being the number of genes of the network. Let us consider the case of a continuous network, called the Wilson-Cowan oscillator, close to the Hopfield system if τ_x = τ_y = 1/ε, w_xx = w_xy = w_yx = w_yy = λ/ε, 0 < ε << 1 [73], which describes the activities x and y of two populations of interacting genes. The variables x and y represent the proteinogenic activities of two populations of genes, whose interaction is described through a quasi-sigmoid function parametrized by λ, controlling its stiffness and accounting for the response to a signal coming from the other gene (whose expressed protein acts as a transcription factor, inhibitor or activator, on the first gene). This response is highly non-linear when λ is large, becoming similar to a Hopfield-like response.
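The definition of E_attractor translates directly into a short computation from the list of basin sizes; the sketch below uses hypothetical basin sizes, not those of the networks studied here:

```python
import math

def attractor_entropy(basin_sizes, n_genes):
    """E_attractor = -sum_k ABRS(A_k) log2 ABRS(A_k), where ABRS(A_k)
    is the size of the attraction basin of A_k divided by 2**n_genes."""
    total = 2 ** n_genes
    return -sum((b / total) * math.log2(b / total)
                for b in basin_sizes if b > 0)

# Four equal basins on a 2-gene state space: maximal heterogeneity log2(4) = 2.
print(attractor_entropy([1, 1, 1, 1], n_genes=2))  # 2.0
```

In practice the basin sizes would be obtained by iterating the network dynamics from every initial configuration and recording which attractor is reached.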
The parameters τ_x and τ_y refer to the influence power of the populations of genes x and y. If θ = 0 and τ_x = τ_y = τ, the Wilson-Cowan oscillator presents a Hopf bifurcation when λτ crosses the value 1 [74], moving from dynamics with only one stable fixed point to dynamics with a repulsor at the origin of the state space E, surrounded by an attractor limit cycle C of period T. Let us consider now the point ϕ(O,h) of C reached after a time (or phase) h on C from a point chosen as origin O = ϕ(O,0) (Figure 3): there is a bijective map b between C and the phase interval [0,T]. The isochron I_h of phase h is the set of points x of B(C) such that d(ϕ(x,t), ϕ(h,t)) tends to 0 when t tends to infinity. I_h extends the map b to all the points of B(C). Let us denote by ϕ_h the flow ϕ restricted to I_h × T_d, where T_d = {kT}_{k∈N}, such that ϕ_h(x,kT) = ϕ(x,kT) for any x in I_h, and B(C) = ∪_{h∈C} I_h. The point h is an attractor for ϕ_h, whose basin is I_h\{h}. Let us now decompose the phase interval [0,T] into m sub-intervals: the isochronal entropy of degree m, E^m_attractor, is related to the m subsets A_k of C = ∪_{k=0,m−1} A_k, each A_k being the attractor for the dynamics ϕ_k:

E^m_attractor = −Σ_{k=0,m−1} RBS(B_k) log_2[RBS(B_k)],

where B_k denotes the attraction basin of A_k and RBS(B_k) its relative size in B(C). If the isochron basins all have the same size, then E^m_attractor = log_2 m. If not, E^m_attractor reflects the spatial heterogeneity of the asymptotic behavior of the network dynamics: the basins B_k can be large (when the flow ϕ is rapid inside) or small (when the flow ϕ is slow inside). In the example of the Wilson-Cowan oscillator with τ_x = τ_y = 1, if λ grows from 0 to infinity, then E^m_attractor increases from 0 to log_2 m [75].
Figure 3. Isochrons and maximum phase shift of the Wilson-Cowan oscillator. In the phase space (x0y) are 30 isochrons and the limit cycle C (in red) of a Wilson-Cowan oscillator (with τ_x = τ_y = τ = 1, and λ = 1.1). The oscillation period is 2π in the vicinity of its Hopf bifurcation; the limit cycle C is close to the unit circle; and isochrons are in the form of spirals.
Thumbnail Bottom Right: the maximum phase shift Δ between phases before and after perturbation is represented along the z axis with a color code (between 0 and 2π). The profile of Δ indicates how a population of Wilson-Cowan oscillators synchronizes following an instantaneous translation, the degree of synchronization (inverse of Δ) depending on the intensity of the translation.
Let us now fix the values of the parameters, τ_x = τ_y = τ = 1 and λ = 1.1, close to the bifurcation in the parameter space (λτ ≈ 1), and consider the isochron landscape of the Wilson-Cowan oscillator (Figure 3). The maximum phase shift observed after an instantaneous translation of the limit cycle of the Wilson-Cowan oscillator is inversely proportional to the intensity of this perturbation (Figure 3 Thumbnail), which means that a population of oscillators must be perturbed by a sufficiently intense stimulus to be synchronized. Indeed, isochrons form spirals, which diverge from one another far from the limit cycle, and synchronization is observed if the translation of the limit cycle falls between two isochrons having close phases f_1 and f_2. These isochrons can be reached by translating the limit cycle C sufficiently far from its initial position [75][76][77]: this is possible if attraction basins are sufficiently wide, and hence if the isochronal entropy E^m_attractor is sufficiently large. Hence, if E_attractor can serve as an index of complexity of the attractor landscape, E^m_attractor can serve more precisely as an index of synchronizability, which is particularly interesting in the study of biological clocks and the entrainment of biological rhythms.

Energy, Frustration and Dynamic Entropy
We first define the energy U and frustration F functions for a genetic network N with n genes in interaction [1,2,[15][16][17][18][19][20][21][22][23][36][37][38][39][40][41][42][43][44]78]:

U(x) = −Σ_{i,j=1,n} α_ij x_i x_j = F(x) − Q^+(N),

where x is a configuration of gene expression (x_i = 1, if the gene i is expressed, and x_i = 0, if not), E denotes the set of all configurations of gene expression, that is, for a Boolean network, the hypercube {0,1}^n, and α_ij = sign(w_ij) is the sign of the interaction weight w_ij, which quantifies the influence of the gene j on the gene i: α_ij = −1 (respectively +1), if j is an inhibitor (respectively an activator) of the expression of i, and α_ij = 0, if j exerts no influence on i. Q^+(N) is equal to the number of positive edges of the interaction graph G of the network N having n genes, whose incidence matrix is A = (α_ij)_{i,j=1,n}. F(x) denotes the global frustration of x; i.e., the number of pairs of genes (i,j) for which the state values x_i and x_j are contradictory with the sign α_ij of the influence of j on i:

F(x) = Σ_{i,j=1,n} F_ij(x),

where F_ij is the local frustration of the pair (i,j), defined by: F_ij(x) = 1, if α_ij = 1 and x_i x_j = 0, or if α_ij = −1 and x_i x_j = 1; F_ij(x) = 0 otherwise. We choose for the dynamics of the network the Hopfield rule (1). It corresponds to a Markov operator defined on the state space E provided with a sequential updating schedule for the expression of the genes in a predefined order [1,2]. Let M = (M_xy) denote its Markov matrix, giving the transition probability (defined by the rule (1) and this sequential updating mode) to go from the configuration x to the configuration y in E, and µ = {µ_x = µ({x})}_{x∈E} denote its stationary distribution on E. The dynamic entropy E is then defined as the Kolmogorov-Sinaï entropy of M by the classical formula:

E = −Σ_{x,y∈E} µ_x M_xy log_2 M_xy.

In the sequential updating mode defined by the order of the nodes equal to the integer order on N = {1, ..., n}, by denoting I = {1, ..., i−1}, N\I = {i, ..., n} and X (respectively Y) the set of the indices i such that x_i = 1 (respectively y_i = 1), the general coefficient M_xy of the transition matrix is the product over i = 1, ..., n of the elementary probabilities given by rule (1), each factor being evaluated on the intermediate configuration whose coordinates on I come from y (genes already updated) and whose coordinates on N\I come from x (genes not yet updated); the invariant measure is the classical Gibbs measure µ [1,2]:

µ_x = exp((Σ_{i,j=1,n} w_ij x_i x_j − θ)/T)/Z,

where Z is the normalization constant. When T = 0, the invariant measure µ is concentrated on the m (≤2^n) deterministic attractors A_k of size a_k of the network dynamics (e.g., two fixed points and a limit cycle in Figure 4 Top) and we have E = 0, because each line of M is reduced to one coefficient equal to one, the others being equal to 0.
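These definitions translate directly into code; the sketch below (with toy sign matrices and Markov matrices, not the paper's networks) computes the local and global frustration from the sign matrix α, and the Kolmogorov-Sinaï entropy from a Markov matrix M and an invariant measure µ:

```python
import math

def local_frustration(alpha_ij, xi, xj):
    """F_ij(x) = 1 when the pair of states contradicts the sign alpha_ij."""
    if alpha_ij == 1:
        return 1 if xi * xj == 0 else 0
    if alpha_ij == -1:
        return 1 if xi * xj == 1 else 0
    return 0

def global_frustration(alpha, x):
    """F(x): sum of the local frustrations over all ordered pairs (i, j)."""
    n = len(x)
    return sum(local_frustration(alpha[i][j], x[i], x[j])
               for i in range(n) for j in range(n))

def ks_entropy(M, mu):
    """E = -sum_{x,y} mu_x M_xy log2 M_xy (Kolmogorov-Sinai entropy)."""
    return -sum(mu[x] * M[x][y] * math.log2(M[x][y])
                for x in range(len(M)) for y in range(len(M[x]))
                if M[x][y] > 0)

# One positive edge 1 -> 0 with gene 1 switched off: the pair is frustrated.
alpha = [[0, 1], [0, 0]]
print(global_frustration(alpha, [1, 0]))                 # 1
# A maximally random two-state chain reaches E = log2(2) = 1.
print(ks_entropy([[0.5, 0.5], [0.5, 0.5]], [0.5, 0.5]))  # 1.0
```

A deterministic matrix (one unit coefficient per line) gives E = 0, as stated in the text for T = 0.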
When +∞ > T > 0, if µ becomes scattered uniformly on the attraction basins of the m deterministic attractors, supposed to be fixed points of the network dynamics (cf. Figure 4 Middle), these basins are transiently the final classes of the Markov transition matrix related to the dynamics (9), and are denoted A_k of size card(A_k) = a_k. Then we have, when this scattering is observed, the approximate formula:

E ≈ log_2 2^n − E_attractor,

which can be written as: E ≈ n − E_attractor.

When T tends to +∞, µ tends to be scattered uniformly over E (cf. Figure 4 Bottom) and E = log_2 2^n = n. More generally, we have:

E = E_ν − E_µ, (13)

where E_ν and E_µ denote, respectively, the entropy of the joint measure ν = (µ_x M_xy) on E² and the entropy of the invariant measure µ, which can be estimated by E_attractor, if the attractors are fixed points, and by E^m_attractor, if the attractor is a limit-cycle. When T = +∞, ν and µ are scattered uniformly, respectively, on E² and E, and we have: E = log_2((2^n)²) − log_2 2^n = n.
When T = 0, if ν is concentrated on a set of states in E² and µ is concentrated on the projection of this set of states on E, then they have the same entropy and E = 0. In between (when +∞ > T > 0), Equation (12) constitutes an estimator of E, if T is large enough.
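The relation E = E_ν − E_µ can be checked numerically on a small Markov chain; the two-state matrix below is a toy example, not one of the genetic networks:

```python
import math

def entropy(p):
    """Shannon entropy (base 2) of a probability vector."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Two-state Markov chain M and its invariant measure mu (mu M = mu).
M = [[0.9, 0.1],
     [0.2, 0.8]]
mu = [2 / 3, 1 / 3]

E = -sum(mu[x] * M[x][y] * math.log2(M[x][y])
         for x in range(2) for y in range(2) if M[x][y] > 0)
joint = [mu[x] * M[x][y] for x in range(2) for y in range(2)]
E_nu, E_mu = entropy(joint), entropy(mu)

print(abs(E - (E_nu - E_mu)) < 1e-12)  # True: E = E_nu - E_mu
```

The identity holds because log_2(µ_x M_xy) = log_2 µ_x + log_2 M_xy, so the joint entropy splits into the invariant-measure entropy plus the dynamic entropy.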

Dynamic Entropy, Index of Robustness
The problem of the robustness of a biological network controlling important functions, such as morphogenesis, memory, cell division, etc., has often been considered over the last 80 years [79][80][81] under different names (structural stability, resilience, bifurcation, etc.), but is still pertinent [82,83].
The dynamic entropy E serves as an index of robustness, being related (as Kolmogorov-Sinaï entropy) to the ability a system has to return to its equilibrium after exogenous or endogenous perturbations [23,43].
By considering a Hopfield rule with all non-zero interaction weights w_ij having the same absolute value c, we will study the robustness of the network in response to variations of c, by proving the following propositions [39,42]:
Proposition 1. Let us consider a deterministic Hopfield Boolean network, which is a circuit sequentially or synchronously updated with a constant absolute value c for its non-zero interaction weights. Then, its dynamics are conservative, keeping constant on the trajectories the Hamiltonian function L, the total discrete kinetic energy, equal to half of the global dynamic frustration:

L(x(t)) = F(x(t))/2, with F(x(t)) = Σ_{i=1,n} F_{i,(i−1)mod n}(x(t)),

where F_{i,(i−1)mod n} is the local dynamic frustration defined between the nodes (i−1) and i, equal to 1 if the updated state x_i(t+1) = H(w_{i,(i−1)mod n} x_{(i−1)mod n}(t) − θ) is contradictory with the sign of the interaction between (i−1) and i, and to 0 otherwise, H denoting the classical Heaviside function. The Proposition 1 still holds if the network is a circuit whose transition functions are the Boolean identity or negation, a circuit on which it is easy to calculate the global frustration F and to show that it characterizes the attractors by remaining constant along them and decreasing inside their attraction basin, as in discrete gradient dynamics (Figure 5 Left) [84].
Let us consider now the quantity ∂E/∂c, the derivative of the dynamic entropy E with respect to the common absolute value c. ∂E/∂c can be considered as the capacity of the robustness index E to resist environmental variations of the network weights, and we have from formula (13): ∂E/∂c = ∂E_ν/∂c − ∂E_µ/∂c. Then we can prove the following result [29,42]:

Proposition 2. If the updating mode of the Hopfield network is sequential, we have:

∂E_µ/∂c = −c Var_µ(F), (15)

where the variance Var_µ is taken for the invariant Gibbs measure µ defined by:

µ_x = exp((Σ_{i,j=1,n} cα_ij x_i x_j − θ)/T)/Z,

where Z = Σ_{y∈E} exp((Σ_{i,j=1,n} cα_ij y_i y_j − θ)/T), ∂Z/∂c = Σ_{y∈E} (Σ_{i,j=1,n} α_ij y_i y_j /T) Z µ_y, and N_i is the neighborhood of i made of the nodes j such that α_ij ≠ 0.
Proof: Differentiating E_µ = −Σ_{x∈E} µ_x log_2 µ_x, and using Σ_{x∈E} ∂µ_x/∂c = 0 together with µ_x ∂log_2 µ_x/∂c = (∂µ_x/∂c)/Log2 and log_2 µ_x = ((Σ_{i,j=1,n} cα_ij x_i x_j − θ)/T)/Log2 − log_2 Z, one obtains an expression involving the products (Σ_{i,j=1,n} α_ij y_i y_j /T)(Σ_{i,j=1,n} α_ij x_i x_j /T) µ_y µ_x; since Σ_{i,j=1,n} α_ij x_i x_j = Q^+(N) − F(x), and the additive constant Q^+(N) does not affect the variance, the result follows. □
Let us define now the local cross-frustration by G_ij(x,y) = 1, if α_ij = 1 and x_i y_j = 0, or if α_ij = −1 and x_i y_j = 1, and G_ij(x,y) = 0 elsewhere; the global cross-frustration is G(x,y) = Σ_{i,j=1,n} G_ij(x,y). We can remark that G(x,x) = F(x). The local V(x,y) and conditional V(x) cross-energy functions are respectively defined as: V(x,y) = Σ_{i,j=1,n} α_ij x_i y_j /T = Q^+(N) − G(x,y) and V(x) = Σ_{y∈E} V(x,y).
Proposition 3. In parallel updating mode, the conditional entropy E_x = −Σ_{y∈E} M_xy log_2 M_xy satisfies the analogue of Proposition 2, with the frustration F replaced by the cross-frustration G(x,·) and the variance taken for the conditional measure M_x.
Proof: It is the same proof as for Proposition 2 by replacing F with G.
Let us remark now that the dynamic entropy E is related to the conditional entropy E_x as follows: E = Σ_{x∈E} µ_x E_x, where E_x = −Σ_{y∈E} M_xy log_2 M_xy.
Proposition 4. In parallel updating mode, we have: ∂E/∂c = −c Var_ν(G) + Cov_ν(J,G), where Var_ν and Cov_ν are respectively the variance and the covariance taken for the random variables G(x,y) and J(x,y) = log_2 M_xy and for the joint measure ν = (µ_x M_xy).

Proof:
We have: ∂E/∂c = Σ_{x∈E} (µ_x ∂E_x/∂c + E_x ∂µ_x/∂c). Hence, from Proposition 3, we get an expression involving Z = Σ_{x,y∈E} exp((Σ_{i,j=1,n} cα_ij x_i y_j − θ)/T) and ∂Z/∂c = Σ_{x,y∈E} (Σ_{i,j=1,n} α_ij x_i y_j /T) exp((Σ_{i,j=1,n} cα_ij x_i y_j − θ)/T), from which we finally get: ∂E/∂c = −c Var_ν(G) + Cov_ν(J,G). □
We have shown in the Propositions above a direct link between the sensitivity to c of the entropies E_µ, E_x and E, and the variability of the frustrations of the network; e.g., if the network is sequentially updated, E_µ decreases with c all the faster as the variance of the global frustration F increases, because of Equation (15): ∂E_µ/∂c = −c Var_µ(F) [39]. Remarks: (1) The Proposition 2 still holds when we replace c, the absolute value of the weights w_ij, by the temperature T, and we have: ∂E_µ/∂T = Var_µ(F)/T³. (2) The Proposition 2 shows that there is a close relationship between the robustness (or structural stability) with respect to variations of the parameter c and the global dynamic frustration F. This global dynamic frustration F is in general easy to calculate. For the configuration x of the genetic network controlling the flowering of Arabidopsis thaliana [37], there is only one frustrated pair; hence, F(x) = 2 (Figure 5 Right). More generally, all the thermodynamic functions introduced here are calculable in theory and could serve to quantify the trajectory (Lyapunov) stability and the structural stability. If the number of states of a discrete state space E is too large, or if integrals on a continuous state space E are too difficult to calculate, it is possible to use Monte Carlo procedures for getting the values of the variance of the global frustration and of the dynamic entropy.
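The Monte Carlo route mentioned above can be sketched with a simple Metropolis sampler of the Gibbs measure µ_x ∝ exp(c Σ_ij α_ij x_i x_j /T); this is an illustrative sampler under that assumption, not the authors' procedure, and the network α below is a toy value:

```python
import math
import random

def mc_frustration_variance(alpha, n, c=1.0, T=1.0,
                            burn_in=500, samples=2000, seed=0):
    """Metropolis Monte Carlo estimate of Var_mu(F) under the Gibbs
    measure mu_x proportional to exp(c * sum_ij alpha_ij x_i x_j / T)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]

    def U(x):  # interaction potential sum_ij alpha_ij x_i x_j
        return sum(alpha[i][j] * x[i] * x[j]
                   for i in range(n) for j in range(n))

    def F(x):  # global frustration: pairs contradicting the sign alpha_ij
        return sum(1 for i in range(n) for j in range(n)
                   if (alpha[i][j] == 1 and x[i] * x[j] == 0)
                   or (alpha[i][j] == -1 and x[i] * x[j] == 1))

    vals, u = [], U(x)
    for step in range(burn_in + samples):
        i = rng.randrange(n)
        x[i] ^= 1                                   # propose one gene flip
        du = U(x) - u
        if du >= 0 or rng.random() < math.exp(c * du / T):
            u += du                                 # accept the flip
        else:
            x[i] ^= 1                               # reject: undo the flip
        if step >= burn_in:
            vals.append(F(x))
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```

With no interactions (α = 0) the frustration is identically zero and the estimated variance is exactly 0; on a frustrated network the estimate approaches the analytic variance as the number of samples grows.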

Entropy Centrality of Nodes
There are four classical types of centrality in an interaction graph G (Figure 2 Top). The betweenness centrality [85] is the first type and is defined for a node k as the quantity:

β_k = Σ_{i,j∈G} β_ij(k),

where β_ij(k) is the number of shortest paths going from j to i through k. The degree centrality is the second type of classical centrality. It is defined from the notions of in-, out- or total-degree of a node i, which correspond respectively to the number of arrows of the interaction graph G having i as end, start, or as either. For example, the in-degree centrality is defined by:

C_i^in = Σ_{j=1,n} |a_ij|,

where a_ij denotes the general coefficient of the signed incidence matrix A of the graph G.
The closeness centrality is the third type of classical centrality. Closeness, as the inverse of the farness, is defined by considering the inverse of the average distance between i and the other nodes j of the interaction graph G:

C_i^closeness = (n − 1)/Σ_{j≠i} l(i,j),

where the distance chosen here is l(i,j), the length of the shortest path between i and j. The eigen-centrality or spectral centrality is the last classical centrality. It takes into account the fact that the neighbors of a node i are possibly highly connected in the interaction graph, considering that connections to (possibly few) highly connected nodes contribute to the centrality of i as much as connections to (possibly numerous) weakly connected nodes. Hence, the eigenvector centrality C_i^eigen of the gene i measures more the global influence of i on the whole network and verifies [86]:

C_i^eigen = (1/λ) Σ_{j=1,n} a_ij C_j^eigen,

where λ is the greatest eigenvalue of the incidence matrix of the interaction graph G.
The four centralities above can be very different ( Figure 6), but each has a specific interest: (i) betweenness centrality relates to the global connectivity with all nodes of the network, (ii) degree centralities correspond to the local connectivity with only nearest nodes, (iii) closeness centrality measures the proximity with other nodes using a distance on the interaction graph and (iv) eigen-centrality quantifies the ability of a node to be connected to nodes, possibly only a few, but with a high connectivity; hence, identifying important hub-relays controlling a wide sub-network.
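For a small directed graph stored as adjacency lists, the degree and closeness centralities above can be sketched with the standard library alone; this is an illustrative implementation on a toy graph, not the friendship networks of Figure 6:

```python
from collections import deque

def bfs_distances(adj, src):
    """Shortest-path lengths l(src, j) from src in a directed graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def total_degree_centrality(adj, i):
    """Number of arrows having i as start (out-degree) or end (in-degree)."""
    out_deg = len(adj[i])
    in_deg = sum(1 for u in adj if i in adj[u])
    return out_deg + in_deg

def closeness_centrality(adj, i):
    """Inverse of the average distance from i to the nodes it can reach."""
    dist = bfs_distances(adj, i)
    far = [d for v, d in dist.items() if v != i]
    return len(far) / sum(far) if far else 0.0

# Directed 3-cycle 0 -> 1 -> 2 -> 0.
adj = {0: [1], 1: [2], 2: [0]}
print(total_degree_centrality(adj, 0))  # 2
print(closeness_centrality(adj, 0))     # 0.666... (= 2 / (1 + 2))
```

Betweenness and eigen-centrality can be computed similarly, by counting shortest paths through each node and by power iteration on the incidence matrix, respectively.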
As a complement to the classical centralities, we now introduce a new notion of centrality, called entropy centrality, taking into account the heterogeneity of the distribution of the states of the neighbors of a node i, and not only its connectivity in the graph, and highlighting the symmetric sub-graphs having equi-distributed states in the neighborhoods of their nodes (Figure 6):

C_i^entropy = −Σ_{k=1,s} v_k log_2 v_k,

where v_k denotes the kth frequency among s (s = 2 in the Boolean case) in the histogram of the values of the states observed in the neighborhood V_i of the node i, the set of the nodes out- or in-linked to the node i.
Figure 6. Comparison between classical types of centrality in an interaction graph, where the diameter of the point representing a node of the network is proportional to its centrality value. Left: eigen-centrality in an interaction graph corresponding to friendship relations in a high school. Middle: total-degree centrality (same interaction graph as left). Right: entropy centrality in a quasi-symmetric graph of friendship.
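The entropy centrality just defined can be sketched as follows (Boolean case s = 2; the state vector and neighborhood below are toy values):

```python
import math
from collections import Counter

def entropy_centrality(states, neighborhood):
    """C_i^entropy = -sum_k v_k log2 v_k over the histogram of the
    states observed in the neighborhood V_i of node i."""
    hist = Counter(states[j] for j in neighborhood)
    total = sum(hist.values())
    return -sum((c / total) * math.log2(c / total) for c in hist.values())

# Equi-distributed Boolean states in the neighborhood: maximum log2(2) = 1.
print(entropy_centrality([1, 0, 1, 0], neighborhood=[0, 1, 2, 3]))  # 1.0
```

A node whose neighbors are all in the same state gets a null entropy centrality, whereas equi-distributed neighborhoods, as in the quasi-symmetric graph of Figure 6 Right, reach the maximal value log_2 s.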

Application to a Real Genetic Network
The genetic regulatory network controlling kinins and kallikrein described in Figure 7 corresponds to genes involved in a familial disease, angioedema, for which there are normal profiles of genetic expression and pathologic ones, particularly in the hereditary form of the disease. By studying the connectivity of the nodes of the interaction graph of the network, we can select four critical genes having a high connectivity power, with a high total degree (≥5) and entropy centrality: these genes are FAK1, SP1, F12 and SERPING1. Surprisingly, the last three of them are the most involved in the etiology of angioedema, being susceptible to mutations and/or hormonal interactions favoring the appearance of the disease. The four genes are circled in blue in the network of Figure 7, and three of them lie on a blue line in Figure 8.

Figure 7. The genetic regulatory network ruling the expression of the genes involved in the angioedema disease. Genes circled in blue are those having a total-degree centrality of at least five.
E_attractor can be evaluated by the quantity −Σ_{k=1,m} ABRS(A_k) log_2[ABRS(A_k)], where ABRS(A_k) is equal to the size of the attraction basin of the kth attractor A_k among the m attractors of the network dynamics, divided by 2^n, where n is the number of genes of the studied network N (equal to 21 in the present case, supposing that circular RNAs are absent). Then, in the present example, we have E_attractor ≈ 0.4520. Hence, we have: E ≈ log_2 2^n − E_attractor ≈ 21 − 0.4520 ≈ 20.55, which corresponds to a high value of the dynamic entropy, and consequently to a high structural stability of the genetic network N controlling the kinin-kallikrein system.
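The numerical estimate above follows from the list of relative basin sizes; the individual sizes of the eleven basins are read from Figure 8 and not reproduced here, so the sketch below applies the generic formula to a toy distribution:

```python
import math

def dynamic_entropy_estimate(abrs, n_genes):
    """E ≈ n - E_attractor, with E_attractor computed from the relative
    attraction basin sizes ABRS(A_k) (which must sum to 1)."""
    e_attr = -sum(p * math.log2(p) for p in abrs if p > 0)
    return n_genes - e_attr

# Toy example: four equal basins on a 2-gene state space.
print(dynamic_entropy_estimate([0.25, 0.25, 0.25, 0.25], n_genes=2))  # 0.0
```

With n = 21 and the basin sizes of Figure 8, dominated by PF3 at 93.43%, this estimator gives E_attractor ≈ 0.4520 and E ≈ 20.55, as stated above.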
We can notice in Figure 8 (top) that the physiologic attractor is a fixed point (PF3) whose attraction basin comprises 93.43% of the possible states of the network. The pathologic attractor corresponding to the hereditary form of angioedema is the fixed point PF10, at which the gene SERPING1 is absent and the genes KLKB1 (coding for kallikrein B1) and KNG (coding for kininogen) are present. The attraction basin of PF10 has a relative size of about 5% of the possible states of the network, while the observed prevalence of hereditary angioedema in the general population is reported to range from 1:10,000 to 1:150,000, without major sex or ethnic differences [87,88].
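The attractors and their basin sizes are obtained by iterating the network dynamics from every possible initial state. The sketch below does this for a deterministic Hopfield-like threshold network (x_i(t+1) = 1 if Σ_j W[i][j] x_j(t) > θ_i, parallel update) on a hypothetical 3-gene network with invented weights; the same exhaustive procedure, applied to the 2^21 states of the real network, yields the basins reported above:

```python
from itertools import product

# Hypothetical 3-gene threshold Boolean network (weights invented for
# illustration), updated in parallel with a Hopfield-like rule:
# x_i(t+1) = 1 if sum_j W[i][j]*x_j(t) > theta[i], else 0.
W = [[ 0,  1, -1],
     [ 1,  0,  0],
     [-1,  1,  0]]
theta = [0, 0, 0]
n = 3

def step(state):
    return tuple(
        int(sum(W[i][j] * state[j] for j in range(n)) > theta[i])
        for i in range(n))

def attractor_of(state):
    """Iterate until a state recurs; return the limit cycle (canonical form)."""
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state)
    ordered = sorted(seen, key=seen.get)
    return tuple(sorted(ordered[seen[state]:]))  # states of the cycle

# Exhaustive enumeration of all 2^n initial states.
basins = {}
for init in product((0, 1), repeat=n):
    basins.setdefault(attractor_of(init), []).append(init)

for att, basin in basins.items():
    kind = "fixed point" if len(att) == 1 else f"cycle of period {len(att)}"
    print(f"{att}: {kind}, basin {len(basin)}/2^{n} = {len(basin)/2**n:.2%}")
```

For this toy choice of weights the dynamics has two fixed points and one periodic attractor of period two, with basins of relative sizes 1/8, 3/8 and 4/8; in the real kinin-kallikrein network all eleven attractors turn out to be fixed points.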
A last remark concerns the number of attractors (11 fixed points and no periodic attractor in Figure 8). A previous work [40] gives an algebraic formula allowing the calculation of the number of attractors of a Boolean network whose Jacobian interaction graph contains two tangential circuits (Table 1): here, one positive circuit of length eight (involved in the richness in attractors, as predicted by [32–34]) and one negative circuit of length four (responsible for the trajectory stability, as predicted by [36,89]). This predicted number (11) is the same as that calculated from the simulation of all trajectories from all possible initial conditions, summarized in Figure 8. More results, both theoretical and applied to real Boolean networks, can be found in the more recent literature [46–49,90], showing a large spectrum of possible applications in genetic, metabolic or social Boolean networks.

Table 1. Total number of attractors of two tangential circuits, read at the intersection of the line fixing the length L of the negative circuit (here L = 4) and the column fixing the length R of the positive circuit tangential to the negative one (here R = 8), giving 11 attractors.

L \ R   1   2   3   4   5   6   7   8   9
1       1   2   2   3   3   5   5   8  10
2       1   1   2   3   3   4   5   8  10
3       1   2   1   3   3   6   5   8   8
4       1   1   2   1   3   4   5  11  10
5       1   2   2   3   1   5   5   8  10
6       1   1   1   3   3   1   5   8   8

Conclusions and Perspectives
The notion of dynamic entropy is common to both discrete and continuous dynamical systems and can be applied to numerous situations in biological modeling, in order to interpret the dynamical behavior of complex systems having several elements in interaction, leading to many observed trajectories up to their ultimate asymptotic behavior, via the attractors. Biological regulatory networks are currently widely used to describe the mechanisms of these interactions at different scales, from genes and cells [68,91,92] to individuals [46,93]. Between these extreme molecular and population levels, identical functions can be found across different species, from paramecia and plants like Bidens pilosus to humans, such as the memory function [49,94] or motion control [95–97]. Because the corresponding regulatory systems are often random, future work will be done in the framework of general random systems [98]—in which the notions of entropy, stochastic attractor (confiner) and invariant measure can help to interpret and classify the observed data in many regulatory networks (neural, genetic, metabolic, social, etc.)—and in biomedical fields for practical applications (see for example [61,99]).

Author Contributions:
The four authors contributed to the research described in the article, and have read and approved the final manuscript.
Funding: This research received no external funding.