
# Limits of multifunctionality in tunable networks

Edited by Luis A. Nunes Amaral, Northwestern University, Evanston, IL, and accepted by Editorial Board Member Pablo G. Debenedetti December 14, 2018 (received for review May 2, 2018)

## Significance

Functionally optimized networks are ubiquitous in nature, e.g., in allosteric proteins that change conformation upon binding to a ligand or vascular networks that distribute oxygen and nutrients in animals or plants. Many of these networks are multifunctional, with proteins that can catalyze more than one substrate or vascular networks that can deliver enhanced flow to more than one localized region of the network. This work investigates the question of how many simultaneous functions a given network can be designed to fulfill, uncovering a phase transition that is related to other constraint–satisfaction transitions such as the jamming transition.

## Abstract

Nature is rife with networks that are functionally optimized to propagate inputs to perform specific tasks. Whether via genetic evolution or dynamic adaptation, many networks create functionality by locally tuning interactions between nodes. Here we explore this behavior in two contexts: strain propagation in mechanical networks and pressure redistribution in flow networks. By adding and removing links, we are able to optimize both types of networks to perform specific functions. We define a single function as a tuned response of a single “target” link when another, predetermined part of the network is activated. Using network structures generated via such optimization, we investigate how many simultaneous functions such networks can be programmed to fulfill. We find that both flow and mechanical networks display qualitatively similar phase transitions in the number of targets that can be tuned, along with the same robust finite-size scaling behavior. We discuss how these properties can be understood in the context of constraint–satisfaction problems.

- multifunctionality
- network optimization
- mechanical networks
- flow networks
- constraint–satisfaction problems

Many naturally occurring and synthetic networks are endowed with specific and efficient functionality. For example, allosteric proteins globally adjust their conformation upon ligand binding to control the activity of a distant active site (1, 2). Venation networks in the leaves of plants are highly optimized for water and nutrient transport (3). In some cases, networks can change their function depending on the needs of the system; vascular networks in animals (4–6), fungi (7), and slime molds (8) can reroute the transport of fluids, enhancing or depleting nutrient levels to support local growth or activity. Modern power grids must precisely distribute electrical energy generated from a limited number of sources to a large number of consumers with widely varying consumption needs at different times (9). All of these networks are optimized to some degree, by evolution via natural selection, dynamic reconfiguration, or human planning.

A key aspect of such functionality is the complexity of a specific task. We define a “function” as an optimized response of a localized component of a network when another predefined, localized component of the system is activated. A “task” is then defined as the collective response of a set of individual functions due to a single input. The number of functions representing a specific task is the task complexity.

In this work we address the limits of complexity for a single task: How many functions composing a single task can be programmed into a network? We consider two examples: (*i*) mechanical networks—in which nodes are connected by central-force harmonic springs—locally flexing in response to an applied strain and (*ii*) flow (or resistor) networks—in which nodes are connected by linear resistors—locally producing a pressure drop due to an applied pressure at the source. These systems are related; flow networks are mathematically equivalent to mechanical networks embedded in one spatial dimension—but with a nontrivial node topology (10).
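This equivalence can be made concrete by comparing the quadratic functionals the two systems minimize. The identification below is a standard one, sketched from the definitions in the text rather than copied from the paper's equations: scalar node displacements play the role of pressures, and spring stiffnesses the role of conductances.

```latex
% Spring energy (scalar displacements u_i)  <->  dissipated power (pressures p_i)
E = \frac{1}{2} \sum_{\langle ij \rangle} k_{ij}\,(u_i - u_j)^2
\qquad\longleftrightarrow\qquad
P = \frac{1}{2} \sum_{\langle ij \rangle} c_{ij}\,(p_i - p_j)^2
```

Minimizing either quadratic form over the node variables yields the same discrete Laplace equation on the underlying graph (the overall factor of 1/2 is conventional and does not affect the minimizer), which is why a flow network behaves like a mechanical network with one-dimensional scalar displacements on an arbitrary topology.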

Macroscopic properties of mechanical networks, such as their bulk and shear moduli, can be tuned by modifying only a tiny fraction of the springs between nodes (11–13) [in contrast to random removal (14)]. This idea was extended to show that such networks can be tuned to develop allosteric behavior via selective spring removal (15–17). Allostery in these systems is a single-function task in which a randomly selected spring (the target) responds in a specified way to a strain imposed on a separate pair of nodes (the source). Here we study complex tasks in which multiple targets are controlled by a single source. We study the scaling of the maximal complexity of a task with network size by asking how many individual targets can be successfully tuned simultaneously and show that in both flow and mechanical networks, the limit of task complexity is set by a phase transition.

## Network Tuning Protocol

Our method for tuning networks follows the general scheme described in our previous work (15) with slight modifications. We start with 2D configurations of soft spheres with periodic boundary conditions created using standard jamming algorithms. We construct networks by placing nodes at the center of each sphere and links (edges) between nodes corresponding to overlapping particles. This ensemble of networks is used for both spring networks, in which edges are unstretched central-force springs, and flow networks, in which edges are resistive conduits. By using the same set of nodes and edges for both systems, we can directly compare results. We chose this ensemble because it is disordered and provides initial networks with properties reminiscent of the corresponding biological systems. Elastic networks with close-range interactions have often been used to model proteins (18), while many natural flow networks have high numbers of closed loops (19) and are highly interconnected (20). However, our networks do not exhibit modular topologies which can appear in some systems (9). While we briefly touch on modularity, an in-depth study of modular networks is outside the scope of this work.
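As a concrete illustration of this construction, here is a minimal sketch (not the authors' code; function and variable names are invented) that extracts the contact network from a set of sphere positions and radii under periodic boundary conditions:

```python
import numpy as np

def contact_network(pos, radii, box):
    """Edge list of a packing's contact network: nodes at sphere centers,
    an edge wherever two spheres overlap (periodic boundary conditions).

    Illustrative sketch, not the authors' code; names are invented.
    pos: (N, d) centers, radii: (N,), box: (d,) periodic box lengths.
    """
    n = len(pos)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            dr = pos[i] - pos[j]
            dr -= box * np.round(dr / box)  # minimum-image convention
            if np.linalg.norm(dr) < radii[i] + radii[j]:
                edges.append((i, j))
    return edges

# Three unit-diameter disks in a 10 x 10 periodic box: disks 0 and 1 overlap
# directly; disks 0 and 2 overlap only across the periodic boundary.
pos = np.array([[0.2, 1.0], [1.0, 1.0], [9.8, 1.0]])
radii = np.array([0.5, 0.5, 0.5])
edges = contact_network(pos, radii, np.array([10.0, 10.0]))  # [(0, 1), (0, 2)]
```

In practice a cell list or k-d tree would replace the O(N²) double loop, but the minimum-image convention shown is the essential ingredient for periodic boxes.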

For each network, a pair of source nodes is chosen randomly, along with a set of randomly chosen target links (see *SI Appendix* for results with global compression and shear sources in mechanical networks).

To control the response of the targets, we define the response ratio
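The displayed definition was lost in extraction. A plausible reconstruction, modeled on the single-target setup of ref. 15 and on the fractional change Δ discussed later, is given below; this is an assumption about the form of the paper's Eq. **1**, not a reproduction of it.

```latex
% Response ratio of target i: target extension (pressure drop) relative
% to that of the source; the tuning constraint demands a fractional
% change of at least \Delta for every target (plausible form, assumed).
\eta_i = \frac{e_i^{T}}{e^{S}},
\qquad
\frac{\eta_i - \eta_i^{(0)}}{\bigl|\eta_i^{(0)}\bigr|} \ge \Delta,
\qquad i = 1, \dots, N_T
```

where η_i^{(0)} denotes the ratio in the untuned network and N_T is the number of targets.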

Our optimization scheme involves minimizing a loss function which penalizes deviations from the constraints in Eq. **1** (*Materials and Methods*). Each optimization step either removes a single link or reinserts a previously removed link, modifying the network topology in discrete steps. More specifically, at each step we measure the resulting change in the loss function for every possible single-link removal or reinsertion and then carry out the move that most decreases the loss function.
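The greedy step described above can be sketched as follows (an illustrative toy, not the authors' implementation; the paper's version also forbids moves that create floppy modes or disconnect the source):

```python
import numpy as np

def greedy_tune(c, loss, max_steps=100, tol=1e-10):
    """Greedy discrete tuning: at each step, remove one present link or
    reinsert one removed link, choosing the single move that lowers the
    loss the most; stop when no move improves the loss.

    Toy sketch, not the authors' implementation.
    c: (E,) link stiffnesses/conductances; a removed link is marked by c = 0.
    """
    c = c.copy()
    c0 = c.max()  # value restored when a link is reinserted
    current = loss(c)
    for _ in range(max_steps):
        best, best_move = current, None
        for e in range(len(c)):
            trial = c.copy()
            trial[e] = 0.0 if c[e] > 0 else c0  # toggle link e
            val = loss(trial)
            if val < best:
                best, best_move = val, e
        if best_move is None or current - best < tol * max(current, 1.0):
            break  # no improving move, or improvement negligible
        c[best_move] = 0.0 if c[best_move] > 0 else c0
        current = best
    return c, current

# Toy loss: drive the total conductance of five unit links down to 3.
c, final = greedy_tune(np.ones(5), lambda c: (c.sum() - 3.0) ** 2)
```

With this toy loss the algorithm removes two links and terminates with the loss exactly at zero, mirroring the stopping criterion described in *Materials and Methods*.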

Fig. 1 depicts examples of both flow and mechanical networks which have been tuned using our prescribed method for the two different types of applied sources. Fig. 1 *A* and *B* shows flow and mechanical networks, respectively, tuned to respond to a source applied to a pair of nodes connected by an edge. Fig. 1 *C* and *D* shows the same networks, but with a pair of source nodes that are not connected by an edge.

## Results

For both flow and mechanical networks, we explore the effects of various aspects of the tuning problem, with particular focus on task complexity. Fig. 2 *A* and *B* displays typical results for the fraction of networks that can be tuned successfully as a function of the number of targets.

In Fig. 2*C*, we plot the transition curves for all system sizes for the four cases studied on the same axes. Using the smoothing spline interpolations shown in Fig. 2 *A* and *B* (*SI Appendix*), we estimate the number of targets at which the transition occurs for each system size.

Fig. 2 *D* and *E* shows that flow networks and mechanical networks have similar power-law behaviors for the maximum number of tunable targets as a function of system size.

## Discussion

We framed the problem of the maximum number of target edges that can be tuned successfully in a mechanical or flow network as a type of discrete constraint–satisfaction problem, in which we asked how many inequality constraints can be satisfied simultaneously. This places the tuning of multifunctionality in the context of a variety of other problems including jamming (21), spin glasses (22), the k-SAT problem (23), k-core percolation (24), and the perceptron (25). Much progress has been made by linking such transitions to the statistical physics of critical phenomena. The hallmark of these systems is the emergence of a SAT–UNSAT transition between regions in parameter space where the constraints can always (or with high probability) be satisfied and regions where the system is frustrated, such that not all constraints can be satisfied simultaneously (25). In mean-field, and in some cases in finite dimensions, the SAT–UNSAT transition is a random first-order transition, with a discontinuous jump in the order parameter (the fraction of satisfied configurations).

We have demonstrated a SAT–UNSAT transition in the complexity of a single task that can be tuned into disordered mechanical and flow networks. In both cases, the maximum task complexity diverges with a power law that is sublinear in N, the number of nodes in the network. The width of the SAT–UNSAT transition (relative to N) vanishes as N diverges, showing that the transition is a true phase transition.

Although we find qualitatively similar behavior in all of the cases studied, the measured exponents depend on several properties of the problem: (*i*) the local or global nature of the source, (*ii*) the magnitude of the desired change in target response Δ, (*iii*) disorder in the link topology, (*iv*) the initial coordination of the network, and (*v*) the choice of whether to tune the link tensions (currents) or extensions (pressure drops) (*SI Appendix*). The values of ν lie in the range 0.6–0.8, with the exception of one case in which ν ≈ 1.0, found for a very large relative change in target response (*SI Appendix*, Table S1). We find that the behavior is not well described by a power law when tuning negative relative changes in target response (*SI Appendix*).

Overall, the divergence of the maximum number of tunable targets with system size and the corresponding vanishing of the transition width (indicating the existence of a phase transition) are very robust observations for positive and sufficiently large relative changes in target responses. We note also that both mechanical networks and flow networks exhibit very similar quantitative behavior despite the fact that flow networks are purely topological, requiring no explicit spatial embedding.

The SAT–UNSAT transition of the task complexity problem introduced here represents a distinct class of discrete constraint–satisfaction transitions due to a complication that arises in the form of the constraints. When tuning a mechanical network, the removal of links can introduce soft modes, making it impossible to uniquely evaluate the network response and subsequently tune a given target. Similarly, in a flow network the tuning process can lead to regions being disconnected from the source, making it impossible to tune any target in that region. To avoid such cases, at each step of the tuning process we are forced to exclude specific link removals (*Materials and Methods*). In both mechanical and flow networks, we find that it becomes more and more likely to introduce a soft mode/disconnected region as the task complexity increases. This makes the problem more difficult to tackle both numerically and analytically compared with previously studied constraint–satisfaction transitions and may lead to differences in the nature of the transition.

For mechanical functions, a perfectly engineered mechanism (e.g., a pair of chopsticks, which creates a large displacement at the tips in response to strain applied where they are held) may perform exactly one function superlatively well, but we have shown that more complex network structures are able to adapt to a number of functions that diverge with the system size. The same argument holds for flow networks: An optimally engineered distribution network is a topological tree, perfectly suited for a specified task but at the same time “rigid,” in the sense that it cannot easily adapt to other tasks. The networks that we have studied are more complex than a pair of chopsticks or a topological tree, and this allows them to be tuned successfully to perform arbitrarily complex tasks.

Our finding that a disordered network topology allows for tunability may have relevance to real biological networks. For example, the development of certain vascular structures within animals is characterized by the initial appearance of a tightly meshed disordered network of veins (the vascular plexus) that is subsequently pruned and tuned to its function (26). The initial disordered network may be a prerequisite of the great variability and versatility seen in natural networks. The tuned mechanical networks serve as simple models for multifunctional allostery in proteins (with a single regulatory site that can control more than one active site, e.g., refs. 27 and 28) or multifunctional metamaterials. Our flow network results give insight into how to control, for example, blood and oxygen distribution in vascular systems or power in an electrical network. Indeed, we find very similar behavior in a flow network with nonplanar topology derived from the United Kingdom (UK) railroad network, which exhibits a high degree of modularity (*SI Appendix*).

Our results raise a number of issues for future investigation. The divergence in the task complexity and vanishing of the transition width with system size are reasonably well approximated by power laws but may deviate for larger system sizes (*SI Appendix*). The measured exponents appear to depend on many specific properties of the problems studied. This may be due to corrections to scaling or to a more fundamental deviation from power-law scaling. Also, it is not clear what conditions on the network topology are necessary to observe the transition we see. For example, networks with high degrees of modularity may not be able to support tasks spanning multiple neighborhoods. However, our results for the UK railroad network suggest that even in this case the qualitative behavior is identical, but with an overall decreased number of tunable targets (*SI Appendix*, *SI Text*). More generally, it has not been investigated how the results depend on network structure/topology and dimensionality, nor how they depend on the tuning algorithm; for instance, the values of the measured exponents may differ for other tuning algorithms.

One further aspect of our results deserves mention: A simple function that controls only a single pair of target nodes can be achieved in an extremely large number of ways. We have shown that a task can be complex, comprising a number of functions that diverges with the system size, and still be tuned successfully.

Here we studied the limits of the complexity of a single task. It would be interesting to understand how many different tasks can be designed successfully and whether that is controlled by a similar SAT–UNSAT transition. Finally, we note that for the mechanical and flow networks studied here, the behavior is governed by a discrete Laplacian operator (29)—mechanical networks obey force balance on each node and flow networks obey Kirchhoff’s laws. However, many networks, such as gene regulatory networks, metabolic networks, social networks, etc., are nonconservative. Moreover, the problems we have studied are linear in their couplings, but ecological networks or neural networks, for example, are typically nonlinear. It is known that even nonconservative and/or nonlinear networks, such as the Hopfield model and jammed packings, can support SAT–UNSAT transitions (30, 31). It would be interesting to study systematically how conservation constraints and linearity affect the nature of the transition.

## Materials and Methods

### Linear Response.

Our networks are described by a set of N nodes and the edges that connect them.

To calculate the response of each type of network, we minimize the corresponding functional: for flow networks, the power dissipated through the network (Eq. **2**); for mechanical networks, the elastic energy (Eq. **3**).

Minimizing Eq. **2** for a flow network in the presence of externally applied boundary currents, or Eq. **3** for a mechanical network in the presence of externally applied forces, we obtain the corresponding equilibrium conditions.

Consequently, the response of either type of network is calculated by solving the corresponding set of linear equations, rewritten in the form of a discrete Laplacian problem (Eq. **7**).
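For flow networks, that linear solve can be sketched as follows (a minimal illustration assuming the standard weighted graph Laplacian; not the authors' code):

```python
import numpy as np

def laplacian(n, edges, c):
    """Weighted graph Laplacian: L[i, i] sums incident conductances,
    L[i, j] = -c_e for each edge e = (i, j)."""
    L = np.zeros((n, n))
    for (i, j), ce in zip(edges, c):
        L[i, i] += ce
        L[j, j] += ce
        L[i, j] -= ce
        L[j, i] -= ce
    return L

def solve_flow(n, edges, c, q, ground=0):
    """Node pressures p solving L p = q (q = net injected currents, summing
    to zero), with node `ground` pinned at p = 0 to remove the global zero
    mode. Minimal sketch, not the authors' code."""
    L = laplacian(n, edges, c)
    keep = [i for i in range(n) if i != ground]
    p = np.zeros(n)
    p[keep] = np.linalg.solve(L[np.ix_(keep, keep)],
                              np.asarray(q, dtype=float)[keep])
    return p

# Two unit conductors in series: inject unit current at node 0, remove it
# at node 2; the pressure drops by 1 across each link.
p = solve_flow(3, [(0, 1), (1, 2)], [1.0, 1.0], [1.0, 0.0, -1.0], ground=2)
```

The mechanical case is the same solve with vector node displacements and spring stiffnesses in place of pressures and conductances.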

### Bordered Laplacian Formulation.

Calculating the linear response requires solving Eq. **7**. However, there are two complications. The first is that the Laplacian operator is in general not invertible due to the presence of global degrees of freedom: for a periodic network in d dimensions, there are d global translational degrees of freedom. Second, we apply edge extension (pressure drop) sources rather than tension (current) sources. These sources can be applied as constraints on the system. Using a bordered Laplacian formulation, we add a constraint for each global translation and for the source.

First, we define the extension (or pressure drop) of the source as the difference in node displacements (pressures) across the pair of source nodes.

As a result, the system of equations we must solve is the bordered linear system, with one additional row and column per constraint.
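A sketch of the bordered construction for a small flow network (illustrative, not the authors' code; the constraint vectors, an all-ones row for the zero mode and a ±1 row for the source drop, follow the description in the text):

```python
import numpy as np

def bordered_solve(L, constraints, values):
    """Solve L p = 0 subject to linear constraints a_k . p = b_k by
    bordering L with one extra row and column per constraint (Lagrange
    multipliers), as described in the text. Illustrative sketch."""
    n, m = L.shape[0], len(constraints)
    A = np.zeros((n + m, n + m))
    A[:n, :n] = L
    for k, a in enumerate(constraints):
        A[n + k, :n] = a  # constraint row: a_k . p = b_k
        A[:n, n + k] = a  # transposed column couples the multiplier back
    rhs = np.zeros(n + m)
    rhs[n:] = values
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]  # node values and constraint multipliers

# Two-link chain 0-1-2 (unit conductances): pin the zero mode (sum p = 0)
# and impose a unit pressure drop between source nodes 0 and 2.
L = np.array([[1.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
p, lam = bordered_solve(L, [np.ones(3), np.array([1.0, 0.0, -1.0])], [0.0, 1.0])
```

The bordered matrix is nonsingular as long as the constraint vectors span the Laplacian's null space, which is exactly why one translation constraint per global mode is added.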

### Tuning Loss Function.

Framed according to Eq. **1**, the problem of tuning a complex task can be viewed as a constraint–satisfaction problem. The goal is to find a set of stiffnesses (conductivities) that simultaneously satisfy each constraint in Eq. **1**. To study this problem numerically, we recast it as an optimization problem in the style of ref. 25, in which we define an objective function that penalizes deviation of the system’s behavior from the desired multifunctionality. Thus, we introduce a loss function (Eq. **14**) that vanishes exactly when all of the constraints are satisfied.
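Eq. **14** itself did not survive extraction; a plausible form consistent with the text, with one-sided quadratic penalties that vanish once a target meets its goal, is sketched below (an assumption, not the paper's exact loss):

```python
import numpy as np

def tuning_loss(delta_achieved, Delta):
    """One-sided ('hinge-squared') penalty: each target contributes only
    while its achieved fractional response change falls short of the goal
    Delta, so the loss is exactly zero when every constraint is satisfied.

    A plausible form consistent with the text, not the paper's exact Eq. 14.
    """
    shortfall = np.maximum(Delta - np.asarray(delta_achieved, dtype=float), 0.0)
    return 0.5 * float(np.sum(shortfall ** 2))

# Two targets with goal Delta = 0.1: the first already satisfies its
# constraint, so only the second (short by 0.05) is penalized.
val = tuning_loss([0.2, 0.05], 0.1)  # ~ 0.5 * 0.05**2
```

A loss of this shape makes the SAT phase identifiable by the loss reaching exactly zero, matching the stopping criterion in *Optimization Method*.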

### Optimization Method.

Our method for tuning a network involves minimizing the loss function in Eq. **14**. In the spirit of refs. 11 and 15, our optimization consists of removing edges from the network, or reinserting previously removed edges, one at a time, modifying the network topology in discrete steps. More specifically, we use a greedy algorithm in which, at each step, we remove or reinsert the edge that most reduces the loss function. This requires a calculation of the new response for each possible move.

Suppose we have a network with a given set of stiffnesses (conductances) at the current step.

To reduce numerical error and maintain the numerical invertibility of the bordered Laplacian, we define the quantity
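Because each move changes a single link, the (bordered) Laplacian changes by a rank-one matrix, and the reference list's inclusion of Sherman and Morrison suggests the inverse is updated rather than recomputed from scratch. A sketch of that formula (an assumption about the implementation details, not the authors' code):

```python
import numpy as np

def sherman_morrison_update(Ainv, u, v):
    """Inverse of (A + u v^T) from A^{-1} in O(N^2) via the
    Sherman-Morrison formula. Changing one link's conductance by dc shifts
    the Laplacian by dc * b b^T (b has +/-1 on the link's endpoints), so
    every candidate move can be scored without a fresh factorization.

    Sketch of a plausible implementation detail, not the authors' code.
    """
    Au = Ainv @ u
    vA = v @ Ainv
    return Ainv - np.outer(Au, vA) / (1.0 + v @ Au)

# Check against a direct inverse on a small symmetric example.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
u = np.array([1.0, 0.0])
Ainv_new = sherman_morrison_update(np.linalg.inv(A), u, u)  # inv(A + u u^T)
```

The update fails when the denominator 1 + vᵀA⁻¹u vanishes, which corresponds exactly to the removal creating a floppy mode or disconnected region; excluding such moves keeps the bordered matrix invertible.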

We repeatedly add or remove edges until either the loss function is explicitly zero (i.e., all constraints are satisfied) or the relative change in the objective function falls below a preset tolerance.

## Acknowledgments

We thank S. Franz, C. P. Goodrich, D. Hexner, and N. Pashine for instructive discussions. This research was supported by the NSF through Grants DMR-1506625 (to J.W.R.) and PHY-1554887 (to E.K. and H.R.); the US Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-FG02-03ER46088 (to S.R.N.); the University of Chicago Materials Research Science and Engineering Center NSF Grant DMR-1420709 (to S.R.N.); the Simons Foundation for the collaboration Cracking the Glass Problem via Awards 454945 (to A.J.L.) and 348125 (to S.R.N.) and Investigator Award 327939 (to A.J.L.); and the Burroughs Wellcome Career Award (to E.K. and H.R.).

## Footnotes

^{1}J.W.R. and H.R. contributed equally to this work.

^{2}To whom correspondence should be addressed. Email: ajliu{at}upenn.edu.

Author contributions: J.W.R., H.R., A.J.L., S.R.N., and E.K. designed research; J.W.R. and H.R. performed research; J.W.R. and H.R. contributed new reagents/analytic tools; J.W.R. and H.R. analyzed data; and J.W.R., H.R., A.J.L., S.R.N., and E.K. wrote the paper.

The authors declare no conflict of interest.

This article is a PNAS Direct Submission. L.A.N.A. is a guest editor invited by the Editorial Board.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1806790116/-/DCSupplemental.

- Copyright © 2019 the Author(s). Published by PNAS.

This open access article is distributed under Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND).

## References

- Tuma RF, Durán WN, Ley K
- Pagani GA, Aiello M
- Tang W, Thorpe MF
- Goodrich CP, Liu AJ, Nagel SR
- Hexner D, Liu AJ, Nagel SR
- Reid DR, et al.
- Rocks JW, et al.
- Yan L, Ravasio R, Brito C, Wyart M
- Flechsig H
- Bahar I, Lezon TR, Yang LW, Eyal E
- O’Hern CS, Silbert LE, Liu AJ, Nagel SR
- Mézard M, Parisi G, Zecchina R
- Schwarz JM, Liu AJ, Chayes LQ
- Franz S, Parisi G, Sevelev M, Urbani P, Zamponi F
- Bussolino F, et al.
- Schlessinger J
- Redner S
- Folli V, Leonetti M, Ruocco G
- Pellegrino S
- Sherman J, Morrison WJ
- Sussman DM, Goodrich CP, Liu AJ
- Calladine CR
