Network entropy

From Wikipedia, the free encyclopedia

In network science, the network entropy is a disorder measure derived from information theory that describes the level of randomness and the amount of information encoded in a graph.[1] It is a relevant metric for quantitatively characterizing real complex networks and can also be used to quantify network complexity.[1][2] There are several formulations in which to measure the network entropy and, as a rule, each requires a particular property of the graph to be singled out, such as the adjacency matrix, degree sequence, degree distribution or number of bifurcations, which may lead to values of entropy that are not invariant under the chosen network description.[3]

Formulations

Degree Distribution Shannon Entropy

The Shannon entropy can be measured for the network degree probability distribution as an average measurement of the heterogeneity of the network,

$\mathcal{H} = -\sum_{k=1}^{N-1} P(k) \ln P(k),$

where $P(k)$ is the probability that a node of the network has degree $k$.
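A minimal Python sketch of this measure, assuming the network is given as a degree sequence (the function name is illustrative):

```python
from collections import Counter
from math import log

def degree_distribution_entropy(degrees):
    """Shannon entropy H = -sum_k P(k) ln P(k) of a degree sequence,
    where P(k) is the empirical probability of observing degree k."""
    n = len(degrees)
    counts = Counter(degrees)
    return -sum((c / n) * log(c / n) for c in counts.values())

# A star graph on 5 nodes: one hub of degree 4, four leaves of degree 1.
star = [4, 1, 1, 1, 1]
# A 5-cycle: every node has degree 2, so P(2) = 1 and the entropy vanishes.
cycle = [2, 2, 2, 2, 2]

print(degree_distribution_entropy(star))   # heterogeneous -> positive entropy
print(degree_distribution_entropy(cycle))  # homogeneous -> zero entropy
```

A regular graph, where all nodes share the same degree, always gives zero entropy, while heterogeneous degree sequences give positive values.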

This formulation has limited use with regard to complexity, information content, causation and temporal information. Nevertheless, algorithmic complexity can characterize any general or universal property of a graph or network, and it is proven that graphs with low entropy have low algorithmic complexity, because the statistical regularities found in a graph are useful for computer programs to recreate it. The same cannot be said for high-entropy networks, which may take any value of algorithmic complexity.[3]

Random Walker Shannon Entropy

Suppose a random walker moves from a node $i$ to any other node in the network. This results in the transition probabilities $p_{ij} = A_{ij}/k_i$, where $A$ is the graph adjacency matrix and $k_i$ is the degree of node $i$. From that, the Shannon entropy can be measured for each node, $\mathcal{S}_i = -\sum_j p_{ij} \ln p_{ij} = \ln k_i$, which results in a normalized network entropy

$\mathcal{S} = \frac{1}{N \ln(N-1)} \sum_{i=1}^{N} \ln k_i,$

calculated by averaging the normalized node entropy $\mathcal{S}_i / \ln(N-1)$ over the whole network.[4]

The normalized network entropy is maximal ($\mathcal{S} = 1$) when the network is fully connected and decreases as the network becomes sparser, reaching $\mathcal{S} = 0$ when every node has degree one. Notice that isolated nodes do not have their transition probabilities defined and are therefore not considered when measuring the network entropy. This formulation of network entropy has low sensitivity to hubs due to the logarithmic factor and is more meaningful for weighted networks,[4] which ultimately makes it hard to differentiate scale-free networks using this measure alone.[2]
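The averaging above can be sketched in a few lines of Python; this is a minimal illustration over an unweighted adjacency matrix, assuming $N$ counts all nodes while isolated nodes are simply skipped in the sum (their node entropy is undefined):

```python
from math import log

def normalized_network_entropy(adjacency):
    """Normalized random-walker Shannon entropy
    S = (1 / (N ln(N-1))) * sum_i ln k_i, skipping isolated nodes."""
    n = len(adjacency)
    degrees = [sum(row) for row in adjacency]
    connected = [k for k in degrees if k > 0]  # isolated nodes excluded
    return sum(log(k) for k in connected) / (n * log(n - 1))

# Fully connected graph on 4 nodes: k_i = 3 for every node, so S = 1.
full = [[0,1,1,1], [1,0,1,1], [1,1,0,1], [1,1,1,0]]
# A path graph 0-1-2-3 is sparser, so its entropy is lower.
path = [[0,1,0,0], [1,0,1,0], [0,1,0,1], [0,0,1,0]]

print(normalized_network_entropy(full))  # -> 1.0
print(normalized_network_entropy(path))  # strictly between 0 and 1
```

The fully connected case hits the maximum exactly because every node entropy equals the normalization constant $\ln(N-1)$.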

Random Walker Kolmogorov–Sinai Entropy

The limitations of the random walker Shannon entropy can be overcome by adapting it to use a Kolmogorov–Sinai entropy. In this context, network entropy is the entropy of a stochastic matrix associated with the graph adjacency matrix $A$, and the random walker Shannon entropy is called the dynamic entropy of the network. From that, let $\lambda$ be the dominant eigenvalue of $A$. It is proven that $\lambda$ satisfies a variational principle[5] that is equivalent to the dynamic entropy for unweighted networks, i.e., when the adjacency matrix consists exclusively of Boolean values. Therefore, the topological entropy is defined as

$\mathcal{H} = \ln \lambda.$
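A minimal sketch of the topological entropy $\ln \lambda$, estimating the dominant eigenvalue by plain power iteration (the function names and the iteration count are illustrative choices, not part of the cited formulation):

```python
from math import log

def dominant_eigenvalue(adjacency, iterations=1000):
    """Estimate the dominant (Perron) eigenvalue of a non-negative
    matrix by power iteration with max-norm rescaling."""
    n = len(adjacency)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iterations):
        w = [sum(adjacency[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)          # for the Perron vector, A v = lam * v
        v = [x / lam for x in w]
    return lam

def topological_entropy(adjacency):
    """Topological network entropy H = ln(lambda_max)."""
    return log(dominant_eigenvalue(adjacency))

# Complete graph on 4 nodes: lambda = N - 1 = 3, so H = ln 3.
full = [[0,1,1,1], [1,0,1,1], [1,1,0,1], [1,1,1,0]]
print(topological_entropy(full))  # -> ln 3, about 1.0986
```

Power iteration suffices here because adjacency matrices of connected graphs are non-negative and irreducible, so the dominant eigenvalue is real and positive by the Perron–Frobenius theorem.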

This formulation is important to the study of network robustness, i.e., the capacity of the network to withstand random structural changes. Robustness is difficult to measure numerically, whereas the entropy can be easily calculated for any network, which is especially important in the context of non-stationary networks. The entropic fluctuation theorem shows that this entropy is positively correlated with robustness and hence with a greater insensitivity of an observable to dynamic or structural perturbations of the network. Moreover, the eigenvalues are inherently related to the multiplicity of internal pathways, leading to a negative correlation between the topological entropy and the average shortest path length.[6]

Besides that, the Kolmogorov entropy is related to the Ricci curvature of the network,[7] a metric that has been used to differentiate stages of cancer from gene co-expression networks,[8] as well as to detect hallmarks of financial crashes from stock correlation networks.[9]

Maximum Entropy Principle

The maximum entropy principle is a variational principle stating that the probability distribution best representing the current state of a system is the one which maximizes the Shannon entropy.[10] This concept can be used to generate an ensemble of random graphs with given structural properties derived from the maximum entropy approach which, in turn, describes the most probable network configuration: the maximum entropy principle allows for maximally unbiased information when complete knowledge is lacking (the microscopic configuration is not accessible, e.g., the adjacency matrix is unknown). On the other hand, this ensemble serves as a null model when the actual microscopic configuration of the network is known, allowing the significance of empirical patterns found in the network to be assessed.[11]

Complex Network Ensembles

It is possible to extend the network entropy formulations to measure the ensemble entropy instead. Let $\mathcal{G}$ be an ensemble of all graphs $G$ with the same number of nodes and type of links, and let $P(G)$ be the occurrence probability of a graph $G \in \mathcal{G}$. From the maximum entropy principle, the goal is to find the $P(G)$ that maximizes the Shannon entropy

$\mathcal{S} = -\sum_{G \in \mathcal{G}} P(G) \ln P(G).$

Moreover, constraints must be imposed in order to extract conclusions. Soft constraints (constraints set over the whole ensemble $\mathcal{G}$) lead to the canonical ensemble, whereas hard constraints (constraints set over each graph $G$) define the microcanonical ensemble. In the end, these ensembles lead to models that can validate a range of network patterns and even reconstruct graphs from limited knowledge, showing that maximum entropy models based on local constraints can quantify the relevance of a set of observed features and extract meaningful information from huge data streams in real time and from high-dimensional noisy data.[11]
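A minimal worked example of the canonical (soft-constraint) case: fixing only the expected number of links, entropy maximization yields the Erdős–Rényi $G(N, p)$ ensemble, in which every node pair connects independently with the same probability $p$. The function names below are illustrative.

```python
from math import comb, log

def max_entropy_edge_probability(n_nodes, expected_edges):
    """Soft constraint on the expected number of links: the maximum
    entropy ensemble is G(N, p) with a uniform link probability p."""
    pairs = comb(n_nodes, 2)
    return expected_edges / pairs

def ensemble_entropy(n_nodes, p):
    """Ensemble Shannon entropy S = -sum_G P(G) ln P(G); for independent
    links it factorizes into one Bernoulli entropy per node pair."""
    pairs = comb(n_nodes, 2)
    if p in (0.0, 1.0):
        return 0.0  # a single graph occurs with probability 1
    h = -(p * log(p) + (1 - p) * log(1 - p))
    return pairs * h

# 10 nodes give comb(10, 2) = 45 pairs; 22.5 expected edges give p = 0.5,
# the point where the per-pair Bernoulli entropy (and so S) is maximal.
p = max_entropy_edge_probability(10, 22.5)
print(p, ensemble_entropy(10, p))  # -> 0.5 and 45 ln 2
```

Adding more constraints (e.g., the full expected degree sequence, giving the configuration-model-like ensembles of reference [11]) proceeds the same way but yields pair-dependent link probabilities.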

References

  1. ^ a b Anand, Kartik; Krioukov, Dmitri; Bianconi, Ginestra (2014). "Entropy distribution and condensation in random networks with a given degree distribution". Physical Review E. 89 (6).
  2. ^ a b Freitas, Cristopher GS; Aquino, Andre LL; Ramos, Heitor S; Frery, Alejandro C; Rosso, Osvaldo A (2019). "A detailed characterization of complex networks using Information Theory". Scientific Reports. Nature Publishing Group. 9 (1): 1–12.
  3. ^ a b Zenil, Hector; Kiani, Narsis A; Tegnér, Jesper (2018). "A review of graph and network complexity from an algorithmic information perspective". Entropy. 20 (8): 551.
  4. ^ a b Small, Michael (2013). "Complex networks from time series: Capturing dynamics". 2013 IEEE International Symposium on Circuits and Systems: 2509–2512.
  5. ^ Arnold, Ludwig; Gundlach, Volker Matthias; Demetrius, Lloyd (1994). "Evolutionary formalism for products of positive random matrices". The Annals of Applied Probability: 859–901.
  6. ^ Demetrius, Lloyd; Manke, Thomas (2005). "Robustness and network evolution—an entropic principle". Physica A: Statistical Mechanics and its Applications. 346 (3): 682–696.
  7. ^ Lott, J.; Villani, C. (2009). "Ricci curvature for metric-measure spaces via optimal transport". Annals of Mathematics.
  8. ^ Sandhu, R.; Georgiou, T.; Reznik, E.; Zhu, L.; Kolesov, I.; Senbabaoglu, Y.; Tannenbaum, A. (2015). "Graph curvature for differentiating cancer networks". Nature Scientific Reports.
  9. ^ Sandhu, Romeil S; Georgiou, Tryphon T; Tannenbaum, Allen R (2016). "Ricci curvature: An economic indicator for market fragility and systemic risk". Science Advances.
  10. ^ Jaynes, E. T. (1957). "Information Theory and Statistical Mechanics" (PDF). Physical Review. Series II. 106 (4): 620–630. Bibcode:1957PhRv..106..620J. doi:10.1103/PhysRev.106.620. MR 0087305.
  11. ^ a b Cimini, Giulio; Squartini, Tiziano; Saracco, Fabio; Garlaschelli, Diego; Gabrielli, Andrea; Caldarelli, Guido (2019). "The statistical physics of real-world networks". Nature Reviews Physics. 1 (1): 58–71.