The adaptive stochasticity hypothesis: Modeling equifinality, multifinality, and adaptation to adversity

Edited by Andrew Zalesky, University of Melbourne, Melbourne, VIC, Australia; received May 4, 2023; accepted August 25, 2023 by Editorial Board Member Linda R. Petzold
October 10, 2023
120 (42) e2307508120

Significance

Some of the most fundamental elements of developmental theory—stochasticity, equifinality, multifinality, and adaptability—are among the hardest to study empirically. Here, through generative modeling, we investigated the role that intrinsic developmental stochasticity plays in shaping neural phenotypes. By running over 10 million simulations, we find that weaker constraints lead to greater multifinality; less sensitivity to increases in developmental noise; greater relative robustness; and a greater likelihood of non-normative outcomes. With empirical data, we show that children from low socioeconomic environments follow this developmental pattern: Their connectomes are better approximated through more stochastic generative models. We put forward the adaptive stochasticity hypothesis, which states that heightened stochasticity within the developing brain may serve as an adaptive mechanism in situations of environmental uncertainty.

Abstract

Neural phenotypes are the result of probabilistic developmental processes. This means that stochasticity is an intrinsic aspect of the brain as it self-organizes over a protracted period. In other words, while both genomic and environmental factors shape the developing nervous system, another significant—though often neglected—contributor is the randomness introduced by probability distributions. Using generative modeling of brain networks, we provide a framework for probing the contribution of stochasticity to neurodevelopmental diversity. To mimic the prenatal scaffold of brain structure set by activity-independent mechanisms, we start our simulations from the medio-posterior neonatal rich club (Developing Human Connectome Project, n = 630). From this initial starting point, models implementing Hebbian-like wiring processes generate variable yet consistently plausible brain network topologies. By analyzing repeated runs of the generative process (>10⁷ simulations), we identify critical determinants and effects of stochasticity. Namely, we find that stochastic variation has a greater impact on brain organization when networks develop under weaker constraints. This heightened stochasticity makes brain networks more robust to random and targeted attacks, but more often results in non-normative phenotypic outcomes. To test our framework empirically, we evaluated whether stochasticity varies according to the experience of early-life deprivation using a cohort of neurodiverse children (Centre for Attention, Learning and Memory; n = 357). We show that low-socioeconomic status predicts more stochastic brain wiring. We conclude that stochasticity may be an unappreciated contributor to relevant developmental outcomes and make specific predictions for future research.
Human brain structure is the result of a complex and dynamic interplay among various constraints. Foremost among them are the genomic information children receive from their parents and the environment in which they grow up (1). In the literature, these two factors—together with the interaction between them—are often credited with explaining the whole of phenotypic variation in brain development across the population (2). However, this overlooks a critical fact: that development unfolds stochastically (3, 4).
Stochasticity refers to the fact that biological development is probabilistic, rather than deterministic; there is an element of intrinsic randomness or noise in the relationship between earlier and later states (5). This appears to be an integral—rather than artifactual—feature of many developmental processes. At the cellular level, due to nonlinear interactions between molecules and entropy-induced variation, identical biochemical processes result in different outcomes across individuals (6). In neurodevelopment itself, stochasticity is particularly operative through randomness in the transcription and translation of key proteins (7), axonal outgrowth (8, 9), and the dynamics of spontaneous neuronal firing and synaptic transmission (10, 11). These processes converge to produce stochastic influences on the morphology of macroscopic brain regions. The relative importance of stochasticity as a contributor to phenotypic outcomes likely varies by domain. Some features—such as the volume of the brainstem—are highly heritable and appear to be tightly governed by genetic constraints (12), while others—such as the microstructure of association tracts—are only weakly heritable and vary greatly even between genetically identical individuals (13). While some of this nonheritable variation is due to differential environmental exposures, which engage experience-dependent neural processes (14, 15), other variation is likely attributable to inherent stochasticity within brain development.
As developmental stochasticity heightens intraindividual variability, its contribution to phenotypic outcomes can serve as an adaptive feature that favors species' success in the face of environmental challenges (6). Evolutionary pressures may therefore have favored a heightened role for stochastic developmental processes in harsh and uncertain environments. Exposure to unpredictability early in life is a robust predictor of later behavior across species (16), including cognitive and emotional outcomes in humans (17–21). This is thought to reflect adaptive responses to ancestral cues or to statistical learning of environmental changes (22). Could ontogenetic stochasticity account for some of this pathway? In other words, could stochastic processes in neural development mediate adaptation to unpredictable early-life environments?
Despite being an inherent feature of neural development, across multiple levels of analysis, stochasticity is largely neglected in empirical studies. One reason for this lacuna could be the difficulty of successfully separating stochastic effects from unknown deterministic effects (3), let alone manipulating stochasticity to evaluate the magnitude of its contribution to a particular outcome.
A promising path toward addressing this gap lies in computational modeling, which permits a quasi-experimental approach to understanding the emergence of neural phenotypes. One such method, generative network modeling, probabilistically simulates realistic whole-brain networks (23, 24). In this model, nodes within a developing network form connections based on an economic trade-off between two parameters: the cost of forming a connection vs. the value that connection may bring. Crucially, this trade-off can be constrained to varying extents by modulating parameter magnitude. Highly constrained simulations minimize stochasticity within the generative process, which may lead to limited phenotypic variability. In contrast, weaker wiring constraints allow for greater randomness from one step of the generative process to the next, which may thereby produce greater phenotypic variation. Thus, in this model of the brain, stochasticity is an integral and manipulatable element of development.
In this work, we explored the contributions of stochasticity to whole-brain organization by analyzing repeated runs of the generative network model (24–26). Starting from a prenatal scaffold of brain architecture obtained from the Developing Human Connectome Project (dHCP, n = 630), we generated over ten million plausible brain networks. As the basic organization of the human connectome is present at birth but undergoes refinement over the course of development (27–29), the whole-brain simulations offer a window into both pre- and postnatal influences on neural connectivity. These may include, for example, the relationship between maternal stress and reduced connectivity in utero (30) or the proposed link between deprivation and the segregation and integration of brain networks across childhood (31).
We therefore analyzed the simulations to answer the following questions: Does developmental stochasticity lead to variability in brain network outcomes? Are some connectomes more sensitive to the effects of heightened developmental stochasticity than others? What advantage might stochastic development confer on brain networks? Through this work, we produce a framework for understanding determinants of variability in brain network organization.
To test the empirical plausibility of our theoretical framework, we then simulated the connectomes of a sample of neurodiverse children (Centre for Attention, Learning and Memory; CALM, n = 357). Children who grew up in early socioeconomic deprivation showed macroscopic brain networks that appear to organize more stochastically. We propose this finding may reflect an adaptive connectomic response to unpredictable features of the early environment. We conclude with specific predictions and recommendations for future investigation that follow from our framework.

Results

To simulate the formation of brain network connectivity, we employed a generative network model. This model is increasingly used in computational neuroscience to simulate highly plausible brain networks (25, 26, 32–38). The model does this by adding connections iteratively based on dynamic economic negotiations between their costs and topological values (23, 24):
p_ij ∝ (d_ij)^η (k_ij)^γ. [1]
The d_ij term represents the cost of a connection between two given nodal regions i and j, which we approximated using the Euclidean distance between the regions. The k_ij term represents the topological value of this connection, which we estimated using the normalized overlap in connectivity between regions i and j. The p_ij term refers to the overall probability of forming a fixed binary connection between nodes i and j and is proportional to the parametrized multiplication of costs and values. Two wiring parameters, η and γ, respectively scale the contributions of each term to wiring probability. In other words, by varying η and γ, it is possible to impose different constraints on the developing network.
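To make the wiring rule concrete, a minimal Python sketch of one growth step is given below. The matching-index normalization, the small eps floor (to keep zero-overlap pairs viable), and all function names are our illustrative assumptions, not the published pipeline.

```python
import numpy as np

def wiring_probabilities(A, D, eta, gamma, eps=1e-5):
    """Normalized wiring probabilities from Eq. 1.
    A: binary adjacency matrix; D: Euclidean distance matrix."""
    deg = A.sum(axis=1)
    common = A @ A                                # shared neighbors for each pair
    union = deg[:, None] + deg[None, :] - common  # neighborhood union size
    K = np.divide(common, union,
                  out=np.zeros(A.shape, dtype=float),
                  where=union > 0)                # matching index k_ij
    Dm = D.astype(float).copy()
    np.fill_diagonal(Dm, 1.0)                     # avoid 0**eta on the diagonal
    P = (Dm ** eta) * ((K + eps) ** gamma)        # Eq. 1: cost x value trade-off
    np.fill_diagonal(P, 0.0)                      # no self-connections
    P[A > 0] = 0.0                                # existing edges cannot re-form
    return P / P.sum()

def add_edge(A, D, eta, gamma, rng):
    """Sample one new undirected connection; this probabilistic draw is the
    intrinsic stochasticity of the model."""
    P = wiring_probabilities(A, D, eta, gamma)
    idx = rng.choice(A.size, p=P.ravel())
    i, j = divmod(idx, A.shape[0])
    A[i, j] = A[j, i] = 1
    return A
```

In a full simulation, `add_edge` would be called repeatedly (recomputing probabilities each step) until the simulated network reaches the empirical connection count.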
The generative model begins from an initial minimal scaffold of connectivity. To root our simulations in a realistic representation of neonatal brain structure, we reconstructed a core rich-club network from data collected from the dHCP (Methods and SI Appendix, Figs. S1 and S2). Because connections form within this initial scaffold probabilistically, the generative model is intrinsically stochastic. This means that there may not be a clear one-to-one relationship between wiring parameters and outcomes. For example, running the model multiple times using the same η and γ parameters may lead to highly dissimilar phenotypes, while running the model with different η and γ parameters may lead to highly similar phenotypes (Fig. 1A).
Fig. 1.
Schematic of dissimilarity procedure. (A) A schematic illustration of how stochasticity in the developing simulation leads to variable outcomes. For example, the same wiring constraints can lead to quite dissimilar outcomes (i.e., multifinality, Top Right) and different wiring constraints can lead to similar outcomes (i.e., equifinality, Top Left). (B) For each parameter combination, we ran 625 repeated simulations. To calculate the dissimilarity between these network outcomes, two measures were produced. The first was a topological dissimilarity measure, which captures the dissimilarity in global measures of network topology between each pairwise combination of network outcomes at each parameter combination (Top). The second was an embedding dissimilarity measure, which captures the dissimilarity in edge existence between each pairwise combination of network outcomes at each parameter combination (Bottom).

Experiment 1—The Effect of Constraints on Multifinality.

We first set out to determine the relationship between constraints on development and variability in network outcomes. This is equivalent to multifinality, a known developmental principle by which the same developmental history can result in diverse phenotypes (39) (Fig. 1 A, Top Right). Specifically, we asked: does changing the wiring constraints systematically manipulate the model's intrinsic stochasticity, thereby altering the variability of observed brain organization?
We addressed this question by undertaking repeated runs of the generative model with identical wiring parameters, at multiple combinations of η and γ . In order to quantify the resulting variability in network outcomes, we calculated two measures: topological and embedding dissimilarity (Fig. 1B and Methods, Experiment 1—Network Outcome Dissimilarity). Topological dissimilarity captures how different networks are to each other in terms of their organizational properties; a low topological dissimilarity indicates similar properties across repeated runs of the simulation, while a high topological dissimilarity indicates multifinality. In contrast, embedding dissimilarity tests whether networks resemble one another in the precise location of their connections. A low embedding dissimilarity means that repeated runs of the simulation share many specific connections between the same nodes, while a high embedding dissimilarity indicates multifinality. These two measures are related but separable: two networks with identical embedding would necessarily have the same topology, but it is possible for two networks to have the same topology but be embedded quite differently in anatomical space.
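As a simplified illustration of the two dissimilarity notions, the sketch below computes both for a pair of binary networks. The global statistics used here are stand-ins for the paper's own set of topological measures, and the Jaccard distance over edges is one plausible way to operationalize embedding dissimilarity.

```python
import numpy as np

def global_topology(A):
    """A small vector of global network statistics (placeholder measures)."""
    n = len(A)
    deg = A.sum(axis=1)
    density = A.sum() / (n * (n - 1))
    return np.array([density, deg.mean(), deg.std(), deg.max()], dtype=float)

def topological_dissimilarity(A1, A2):
    """Distance between global-statistic vectors: high values indicate
    multifinality in organizational properties."""
    return float(np.linalg.norm(global_topology(A1) - global_topology(A2)))

def embedding_dissimilarity(A1, A2):
    """Jaccard distance over edges: fraction of connections present in one
    network but not the other (high values = different spatial layouts)."""
    e1, e2 = A1 > 0, A2 > 0
    union = np.logical_or(e1, e2).sum()
    if union == 0:
        return 0.0
    return 1.0 - np.logical_and(e1, e2).sum() / union
```

A relabeled ring network makes the separability of the two measures concrete: its degree statistics (topology) are unchanged, but its edges sit between different node pairs (embedding).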
We first explored the topological dissimilarity across the wiring parameter space (Fig. 2A). We found that constraints on wiring are in fact systematically related to multifinality in brain network organization. However, η and γ do not contribute to topological dissimilarity in the same way. Namely, γ, which guides the extent to which regions prefer connecting to regions with similar neighborhoods, has a much larger influence on network stochasticity (R² = 76.3%) compared to η, which penalizes long-distance connections (R² = 0.1%) (Fig. 2B). In Fig. 2C, this result is presented schematically, illustrating that weaker γ constraints drive networks to be topologically highly dissimilar across runs, while fewer differences emerge when the γ parameter is larger in magnitude.
Fig. 2.
Weaker brain wiring constraints increase multifinality in network outcomes. (A) The topological dissimilarity landscape is given across the wiring parameter space. Purple corresponds to highly topologically dissimilar networks while blue corresponds to low topological dissimilarity. (B) A scatter plot of the topological dissimilarity as a function of the wiring parameters shows that γ most drives topological dissimilarity. (C) Variable outcomes are more likely with weaker topological constraints on the network development (highlighted by the light purple wider funnel) and vice versa for highly constrained networks (highlighted by the light blue narrower funnel). (D) The embedding dissimilarity landscape is given across the wiring parameter space. Green corresponds to highly dissimilar networks in terms of embeddings while yellow corresponds to low embedding dissimilarity. (E) A scatter plot of the embedding dissimilarity as a function of the wiring parameters shows that η most drives embedding dissimilarity. (F) Variable outcomes are more likely with weaker embedding constraints on the network development (highlighted by the light green wider funnel) and vice versa for highly constrained networks (highlighted by the light yellow narrower funnel).
We then explored the embedding dissimilarity across the wiring parameter space. As shown in Fig. 2D, an opposite effect occurs, in which η has a much larger influence (R² = 83.5%) compared to γ (R² = 8.3%) (Fig. 2E). The schematic in Fig. 2F illustrates this, showing that weaker η constraints drive networks to more dissimilar embedding states, while fewer differences emerge when the η parameter is larger in (negative) magnitude.
These findings can be interpreted quite straightforwardly from the wiring equation. As γ determines the extent to which nodes with similar connectivity profiles wire with each other, a strong γ favors a similar topological arrangement across runs. Across repeated runs, the same organizational features will emerge. When η has a large magnitude, the simulation favors the formation of local short connections—these are more consistently present across multiple runs, increasing the embedding similarity. Thus, across multiple runs of the simulation, the same brain wiring parameters can produce networks that vary in organizational properties (topological dissimilarity) and in the precise location of their connections (embedding dissimilarity), depending upon the severity of each of the two wiring constraints. This interpretation is consistent with the notion that η influences where connectivity patterns are located (i.e., embedding) and γ influences the strength of connectivity patterns (i.e., topology) (36).
Overall, weaker wiring constraints result in more heterogeneity in brain network outcomes, while stronger wiring constraints reduce stochasticity and limit the range of possible outcomes.

Experiment 2—The Effect of Noise on Multifinality.

Our first experiment revealed that weakening wiring constraints leads to greater multifinality in the organization and spatial localization of the connectome. But can variation in network outcomes be increased by enhancing the noisiness of network development itself? In other words, does up-regulating developmental stochasticity increase multifinality? And if so, when does this intervention have the greatest influence on network outcomes?
To answer these questions, we heightened the stochasticity of the simulations at different stages of network development. To do this, we allowed the model to choose 5% of total connections completely at random either at the start of, halfway through, or in the final steps of the generative process (termed early, middle, and late, respectively) (Methods, Experiment 2—Timing of Noise Analysis).
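The noise-injection manipulation amounts to replacing the wiring rule with a uniformly random edge choice for a contiguous block of 5% of the growth steps, placed at the start, center, or end of development. A small sketch (the exact placement of the block is our assumption):

```python
def noisy_steps(total_edges, stage, frac=0.05):
    """Return the set of growth-step indices at which the wiring rule is
    replaced by a uniformly random edge choice. 'early', 'middle', and
    'late' place a block of frac * total_edges steps at the start, center,
    or end of the generative process."""
    k = max(1, round(frac * total_edges))
    start = {"early": 0,
             "middle": (total_edges - k) // 2,
             "late": total_edges - k}[stage]
    return set(range(start, start + k))
```

In the growth loop, each step t would then check `t in noisy_steps(m, stage)` and, if so, sample the new connection uniformly at random rather than from Eq. 1.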
We first examined the effect of heightened developmental noise on topological dissimilarity. Our results indicate that the later the noise is injected into the developmental simulation, the greater the increase in topological dissimilarity across multiple runs of the same parameters (Fig. 3 A–C and SI Appendix, Fig. S4, ANOVA F(2,1872) = 134.44, P = 2.774 × 10⁻⁵⁵; early M = 1.242 × 10³, SD = 6.685 × 10³; middle M = 8.212 × 10³, SD = 9.826 × 10³; late M = 8.296 × 10³, SD = 9.348 × 10³). The effect was predominantly present at higher values of γ (SI Appendix, Fig. S4 I–K), where the simulations are more strongly driven by the wiring rule. Thus, the topology of networks that are following a more deterministic, rule-based approach to development is more sensitive to the effects of injecting noise.
Fig. 3.
Up-regulating developmental noise increases variability in network outcomes. Injecting stochasticity (A) early, (B) middle, or (C) late in the generative process increased the topological dissimilarity across repeated runs of each parameter combination, with middle and late noise exerting a greater impact (ANOVA F(2,1872) = 134.44, P = 2.774 × 10⁻⁵⁵), especially at higher values of γ. Injecting stochasticity (D) early, (E) middle, or (F) late in the generative process also increased the embedding dissimilarity, with early noise exerting a greater impact (ANOVA F(2,1872) = 140.92, P = 9.792 × 10⁻⁵⁸), especially at lower values of η.
We next evaluated the impact on embedding dissimilarity. Interestingly, we find the opposite phenomenon, showing that the earlier noise is injected into the developmental simulation, the greater the increase in embedding dissimilarity across runs (Fig. 3 D–F and SI Appendix, Fig. S5, ANOVA F(2,1872) = 140.92, P = 9.792 × 10⁻⁵⁸; early M = 0.0298, SD = 0.0246; middle M = 0.0126, SD = 0.0144; late M = 0.0103, SD = 0.0101). Injecting noise results in a greater increase in outcome variability when models are further from the origin of the parameter space (SI Appendix, Fig. S5 I–K). This greater impact of early stochasticity on the final layout of connections is consistent with the importance of the initial scaffold upon which network development unfolds.
In summary, the timing of heightened ontogenetic stochasticity shapes its impact on resulting network outcomes. Weaker wiring constraints (i.e., low magnitude parameters) broadly protect against the impact of injecting noise at any time point during development. Networks that are developing under stronger wiring constraints are more sensitive, but this effect depends on the timing of the intervention. Namely, early stochasticity has little effect on topological dissimilarity but greatly increases embedding dissimilarity, while late stochasticity greatly increases topological dissimilarity with a negligible impact on embedding. This indicates that topological characteristics of the network can recover from temporary increases in developmental noise, given sufficient time, but that the precise identity and locations of the connections that form depend strongly on the initial scaffold.

Experiment 3—The Effect of Constraints on Robustness.

Our first two experiments revealed that networks with weaker wiring constraints exhibit greater multifinality but are less sensitive to the effects of temporary increases in developmental noise. In contrast, highly constrained networks exhibit less multifinality but are more vulnerable to such interventions, depending on their timing.
As an evolved system, the brain is best understood in light of selective pressures that shaped its features across evolutionary history (40). This includes the brain’s potential to reach multiple phenotypic outcomes. Why might this be advantageous? One possibility is that higher stochasticity within development, which leads to multifinality, confers robustness to external perturbation. This phenomenon has been observed in developmental systems in biology at multiple levels (41).
To examine this possibility, we tested whether higher stochasticity within the simulation confers robustness to targeted attacks on nodes with high levels of connectivity. Fig. 4A provides a schematic overview of the experimental procedure. In short, by measuring the resilience of each network’s communicability to the removal of nodes, we obtained a measure of relative robustness to external perturbation (Methods, Experiment 3—Robustness Analysis). The smaller the change in response to simulated attacks, the greater the robustness.
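The attack protocol can be sketched as follows: repeatedly disconnect the highest-degree node (the hub), track the network's mean communicability (the mean of the matrix exponential of the adjacency matrix), and summarize robustness as the slope of that curve. This is an illustrative reconstruction; the β coefficient estimation in the paper may differ in detail.

```python
import numpy as np

def mean_communicability(A):
    """Mean of the communicability matrix exp(A), via eigendecomposition
    (valid because A is symmetric)."""
    w, V = np.linalg.eigh(A.astype(float))
    expA = (V * np.exp(w)) @ V.T
    return float(expA.mean())

def targeted_attack_curve(A, n_remove):
    """Disconnect the current highest-degree node repeatedly, recording
    mean communicability after each removal."""
    A = A.astype(float).copy()
    curve = [mean_communicability(A)]
    for _ in range(n_remove):
        hub = int(A.sum(axis=1).argmax())
        A[hub, :] = 0.0
        A[:, hub] = 0.0
        curve.append(mean_communicability(A))
    return np.array(curve)

def robustness_beta(curve):
    """Slope of communicability vs. nodes removed; a slope closer to zero
    indicates greater relative robustness."""
    return float(np.polyfit(np.arange(len(curve)), curve, 1)[0])
```

A star graph makes the hub-dependence point vividly: removing its single hub collapses communication, so its attack curve falls steeply.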
Fig. 4.
Weaker wiring constraints confer relative resilience to simulated attacks. (A) A schematic demonstration of the robustness testing protocol. Robustness is estimated by quantifying how much network communicability decreases upon the removal of network nodes. (B) The β coefficient computed from a targeted attack regime, which preferentially removes network hubs, across the parameter space. More constrained networks (Top Left of the landscape) are less robust, as the gradient of change is greater. Weakly constrained networks (Bottom Right of the landscape) exhibit less change. To the right of the landscape, the communicability trajectories of networks with the least (Left) and most (Right) robustness to change are presented. (C) A schematic showing that weakly constrained networks (which achieve more multifinality, as indicated by the funnel width), are relatively more robust to attack.
Our findings indicate that networks with stronger constraints on their topology are more vulnerable to targeted attacks relative to networks with weaker constraints (Fig. 4B). In SI Appendix, Fig. S6, we show this is also true for a random attack regime. Given that strong wiring constraints produce topologically invariant networks, these may rely more strongly on core hub nodes, resulting in a proportionally larger drop in network communication when such nodes are removed. However, it is important to note that this large drop still leaves the networks at a higher absolute capacity for communication than networks that develop with weaker wiring constraints. Thus, networks developing more deterministically are more vulnerable to change in relative terms but are initially more suited to supporting communication. For a schematic presentation of this finding, see Fig. 4C.
It is important to note that while networks with weaker constraints have lower absolute communicability, this may not necessarily be a disadvantage. Indeed, these relatively random networks are more likely to support efficient communication because the communication paths across the network tend to be less redundant (42). We show this trade-off between communicability and efficiency within our simulations in SI Appendix, Fig. S7.
In summary, heightened stochasticity within network development has a protective effect on the structure of the network, just as it protects against the impact of increased developmental noise (as shown in Experiment 2). This resilience to change is accompanied by greater efficiency and lower initial communicability.

Experiment 4—The Effects of Constraints on Equifinality.

We have so far established that weakening wiring constraints leads to greater multifinality in network outcomes and robustness to external perturbation, while highly constrained networks tend to have more invariant outcomes along with a greater vulnerability to change. Before turning to empirical data, we aimed to examine the relationship between stochasticity and a final core developmental principle: equifinality.
Equifinality refers to the attainment of a similar phenotype by way of a diverse set of pathways (39) and is a known characteristic of child neural development (43). Numerous contributing factors, such as similar environments and experiences, may contribute to similar phenotypes between two genetically different individuals.
We hypothesized that both the wiring constraints and the intrinsic stochasticity of the developing brain could modulate the resultant equifinality by setting the range of outcomes that any individual could exhibit. In other words, we predicted that brains generated using relatively similar wiring parameters would exhibit greater overlap in the range of potential phenotypic outcomes than those generated using more dissimilar parameters. In particular, we predicted that models with similar γ parameters would achieve more equifinal topological properties (Results, Experiment 1—The Effect of Constraints on Multifinality). However, we also expected that equifinality would depend on the intrinsic stochasticity of the models. Namely, we thought that simulations with weaker constraints would achieve lower equifinality with other parameter combinations than more constrained simulations, due to the broader range of potential outcomes of the simulation.
To test this prediction, we assessed the ability of a supervised machine learning model to successfully distinguish simulations run with differing wiring parameters based on their topology. Specifically, we trained a support vector machine (SVM) to distinguish all possible pairwise comparisons of the topological properties of the simulations (Methods, Experiment 4—Equifinality Analysis). Using 10-fold cross-validation, we computed the misclassification rate of the SVM. A higher rate would indicate that the SVM was performing closer to chance and that the two sets of simulations were exhibiting equifinality. Lower misclassification rates would indicate that the SVM was successfully able to distinguish the two sets of simulations, and when this rate nears zero, the two sets could not be said to exhibit any equifinality.
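The equifinality index can be sketched with scikit-learn; the linear kernel and feature layout here are our assumptions (each row being a vector of topological properties for one simulation run), not the published configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def misclassification_rate(X1, X2, folds=10):
    """10-fold cross-validated misclassification of an SVM trained to tell
    two sets of simulations apart from their topological features.
    Rates near 0.5 indicate equifinality (indistinguishable outcomes);
    rates near 0 indicate clearly distinct phenotypes."""
    X = np.vstack([X1, X2])
    y = np.concatenate([np.zeros(len(X1)), np.ones(len(X2))])
    accuracy = cross_val_score(SVC(kernel="linear"), X, y, cv=folds).mean()
    return 1.0 - accuracy
```

On synthetic features, two samples drawn from the same distribution yield a rate near chance, while well-separated samples yield a rate near zero.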
As predicted, simulations run with closer γ parameters tended to show higher equifinality (Fig. 5A, r = −0.330, P < 0.0001). This reflects that a similar preference for wiring value results in simulations that occupy an overlapping range of phenotypes (Fig. 1A). Furthermore, simulations with higher multifinality in network topology also exhibited a lower mean misclassification rate (Fig. 5B, R² = 61.9%, P < 0.0001), indicating that developmental stochasticity was inversely related to equifinality. Thus, it appears that networks that grow less deterministically are more likely to end up in unique phenotypic outcomes that are easily distinguishable from more normatively developing networks.
Fig. 5.
Similar wiring constraints and deterministic development lead to equifinality in network topology. A SVM was trained to distinguish simulations run with different parameters. The SVM sought to correctly classify the runs of the simulations using their topological properties. The mean misclassification rate refers to the mean proportion of the sample that was incorrectly classified across all pairwise comparisons, and was obtained using cross-validation. (A) A density plot of equifinality, measured using the misclassification rate of the SVM, by the distance between the γ parameters of the two simulations. (B) The equifinality exhibited by a simulation, indexed using the mean misclassification rate of the SVM, by the topological dissimilarity of that simulation. The mean was taken over all pairwise comparisons of that simulation.
Does this pattern hold for the embedding of connections as well? In other words, does equifinality in embedding decrease with more disparate cost constraints and with more stochastic development? To test this possibility, we trained the SVM to distinguish all possible pairwise combinations of networks based on the connectivity matrices themselves. The mean misclassification rate was consistently near 50% for each pairwise comparison, indicating that a more sensitive classifier would be needed to test for determinants of equifinality in embedding.

Prediction—Environmental Uncertainty May Favor Weaker Wiring Constraints.

Experiment 1—The Effect of Constraints on Multifinality established that weaker wiring constraints lead to greater multifinality, while stronger wiring constraints narrow the range of phenotypic outcomes of the network. Experiment 2—The Effect of Noise on Multifinality demonstrated that weaker wiring constraints are protective against the impact of heightened developmental noise, whereas stronger constraints allow noise to produce a time-dependent increase in multifinality. Experiment 3—The Effect of Constraints on Robustness revealed that weaker constraints have a protective effect on the structure of the network, operationalized as resilience to relative change in communicability in response to targeted and random simulated attacks. However, this resilience is relative, rather than absolute. Finally, Experiment 4—The Effects of Constraints on Equifinality showed that networks developing more stochastically more often exhibit strongly unique phenotypes, meaning that there is a lesser likelihood of equifinal outcomes. Together, our findings reveal that stochasticity is a critical element of the successful simulation of brain networks, which may correspond to real-world developmental processes.
Our theoretical-computational framework gives rise to falsifiable hypotheses about child development. Namely, we propose that children growing up in unpredictable contexts (i.e., environments with fewer statistical regularities) have brain networks whose organization is best approximated with weaker wiring constraints (i.e., smaller magnitude η and/or γ ). As per Experiment 1, this would correspond to more stochastic development. As per Experiments 2 and 3, such heightened stochasticity would make brain networks less sensitive to perturbation, which is likely a useful feature in uncertain environments. Finally, as Experiment 4 demonstrated, this upregulation of stochasticity could lead to a greater likelihood of aberrant neural phenotypes as a by-product.
To test this prediction, we obtained data from the CALM (n = 357) (see Methods for more detail and SI Appendix, Table S1 for demographics). To approximate the unpredictability of the early-life environment, we measured the Index of Multiple Deprivation (IMD) of each child, which captures their relative socioeconomic disadvantage. We then split subjects into high and low socioeconomic status (SES) groups using the sample median IMD (SI Appendix, Fig. S3). Next, we reconstructed each child's structural connectome using diffusion imaging and probabilistic tractography with anatomical constraints and computed the best estimate of each child's ground-truth wiring parameters (Methods). In line with our prediction, we found that subjects with high SES show a greater (more negative) magnitude of the wiring parameter η (Fig. 6, Left; P = 0.0247, Cohen's d = 0.239). We found no detectable difference in the γ parameter (Fig. 6, Middle; P = 0.817). Finally, we found a corresponding better model fit in lower SES children (Fig. 6, Right; P = 0.019, Cohen's d = 0.250), which accords with the fact that these parameters simulate more randomly organized networks—a comparatively easy target (see ref. 34). We corroborated our statistical findings via a permutation testing procedure (Methods).
Fig. 6.
Wiring parameters and model fits vary with SES. In n = 357 children, we split groups into high and low SES groupings. We find that the wiring parameter η is greater in negative magnitude in high SES children, suggesting more constrained connectivity (Left). We find no difference in the γ parameter between groups (Middle). Model fits are better in low SES networks, due to the connectome being more randomly organized and therefore more easily simulated by the generative model (Right).
In summary, we explored the idea that—as stochasticity confers both robustness to perturbation and intrinsic variability in phenotypic outcomes—children within low-SES environments show an adaptive preference for heightened stochasticity within the topology of their macroscopic brain networks. Our findings, which show that the connection length constraint η is weakened in children from low-SES environments, appear to be consistent with this hypothesis.

Discussion

Some of the most fundamental elements of developmental theory—equifinality, multifinality, and adaptability—are among the hardest to study empirically. In this work, through generative network modeling, we were able to investigate the role of stochasticity in the emergence of macroscopic brain networks. Importantly, this computational framework provided a means to systematically study these core developmental concepts. Through our simulations, we found that weaker wiring constraints lead to greater multifinality in brain network phenotypes; less sensitivity to temporary increases in developmental noise; greater relative robustness to simulated attacks; and a greater likelihood of atypical phenotypes (Fig. 7). By fitting our models to empirical data, we found that children from low-SES environments appeared to follow this developmental pattern: Their connectomes were better approximated through a more stochastic generative model with weaker constraints on long-distance connections. This difference could reflect both prenatal effects via differences in the uterine environment and maternal health (44, 45) and postnatal effects via experience-dependent developmental processes (31, 46).
Fig. 7.
Contribution of developmental stochasticity to multifinality, equifinality, and robustness of brain networks. Each cone represents a stochastic manifold, or the range of possible paths a developing network may take. The width of the cone indicates the range of phenotypic outcomes that may be achieved (multifinality). At weak wiring constraints, the cones are wide (indicating more stochastic development), while at strong wiring constraints, they are narrow (indicating more deterministic development). The relative angle of the cones indicates the likelihood of ending up in a similar range of phenotypic values (equifinality). Stochasticity is inversely proportional to the equifinality achieved; weak constraints more often produce diverse and unique phenotypic outcomes. Finally, stochastically developing networks tend to be more robust to perturbation and change than their deterministically developing counterparts.

The Adaptive Stochasticity Hypothesis.

Our work has highlighted, at a computational level, several core effects of developmental stochasticity. For example, it affords relative robustness to perturbation. That is, while it lowers the absolute capacity to support network communication, it confers resilience to change. Developmental stochasticity also inoculates networks against the impact of temporary increases in noise. Finally, higher stochasticity results in greater multifinality and more distinct network outcomes.
Are these features advantageous or disadvantageous? This likely depends on the environmental context of the developing system. In an enriched environment that is statistically predictable, stochasticity may be disadvantageous because it introduces greater risk of unfavorable outcomes that are unsuited to that narrow context. In contrast, networks developing within a statistically unpredictable environment benefit from what stochasticity can provide: flexibility to unexpected perturbation and robustness to change. The adaptive benefit of heightened stochasticity could coexist alongside potentially detrimental effects, such as elevated energetic expenditure and lower absolute communicability. However, other research has shown that costly life-course strategies can have adaptive benefits that outweigh such harms in adverse environments (16). This is particularly true when the change occurs early in life and helps the organism achieve developmentally sensitive aims—such as reproduction or competition.
We therefore put forward the adaptive stochasticity hypothesis. This states that heightened stochasticity within the developing brain may serve as an adaptive mechanism in situations of environmental uncertainty. In other words, it could constitute an active response that the brain implements in stressful or uncertain environments, potentially through upregulation of cellular stochastic processes. This would mirror demonstrations that neuronal noise can itself be an important source of information within developing neural systems, as it allows synapses to better communicate their degree of uncertainty (11).
Our empirical finding that the brains of children from low socioeconomic backgrounds—which we use as an approximate measure of environmental predictability—are better simulated with more stochastic models offers preliminary and partial support for this hypothesis. The next crucial step is to explore child development longitudinally and test whether heightened stochasticity in brain networks confers a neural or cognitive advantage at particular points in development. For example, does it support more efficient task-switching in adolescence?

Predictions.

The adaptive stochasticity hypothesis gives rise to numerous testable predictions. If unpredictability increases developmental stochasticity in brain networks:
Connectomic variability. Connectomic phenotypes should vary more among children who live in unpredictable environments. Our findings suggest that this variability may be more prominent in the embedding of networks, rather than their global topological properties.
Separable intrinsic and extrinsic influences. Intrinsic (e.g., genomic) and extrinsic (e.g., environmental) factors should predict at least partially nonoverlapping variance in wiring parameters.
Topological signature of randomness. The upregulation of developmental stochasticity should be manifest in network topology. Likely candidates for a measurable signature of stochasticity include lower segregation and greater integration (31), consistent with higher entropy.
Elevated likelihood of non-normative outcomes. Weaker wiring parameters, giving rise to heightened neurodiversity, should be associated with increased rates of neurodevelopmental conditions, captured by canonical diagnostic groups [e.g., schizophrenia (36)] or transdiagnostic dimensions (26).
Adaptive cognition. Weaker wiring parameters should be associated with adaptive outcomes on the cognitive level, including better performance on tasks that are relevant in harsh environments or under adverse testing conditions (47). To our knowledge, no diffusion imaging cohort has yet collected such measures.
Inducible stochasticity. Children raised in predictable environments later introduced to severe unpredictability should exhibit a shift in wiring parameters over time, depending on the chronicity of that unpredictability.

Limitations.

Our study has limitations in both methods and scope. First, our generative network modeling framework is a blunt approximation of stochasticity in the developing brain. The models used here currently only approximate the topology of binary networks. New models that also modulate connection strength over time, in addition to connection formation, may be better positioned to capture developmental stochasticity (42). Similarly, models that vary rules and/or parameters across space may offer a path to examining the differential contribution of stochasticity to the connectivity of different brain regions. In this case, we would tentatively expect the contribution to be greater in those areas that exhibit greater variability across genetically identical individuals [e.g., association tracts (13)] rather than those that exhibit more heritability. Second, this work does not consider the valence of the environment, which is entangled with but theoretically separate from unpredictability (31). Third, we do not deal with meta-predictability, or the consistency of unpredictability over time (22). Fine-grained measures of the environment and sophisticated modeling would be necessary to test how these may interact with developmental stochasticity. Fourth, we have focused our analysis on macroscopic neural networks derived from diffusion tensor imaging, which is characterized by certain empirical limitations (48, 49) and cannot capture circuitry at lower scales. Relatedly, in this work, we used a relatively low-resolution 116-node parcellation to model the macroscopic connectome (50). While this likely avoids more type I errors than higher resolution parcellations would (a risk that is particularly relevant when reconstructing binary connectomes), it may have decreased our ability to detect stochasticity.
Follow-up studies with higher resolution atlases, if balanced with the increased computational burden and rate of false positives, could afford additional insights—including in regional variations in stochasticity. Finally, the adaptive perspective does not mean to imply that every response to an adverse environment is inherently beneficial. The challenge for future research will be to disentangle inevitable passive harms caused by adversity from what might constitute an adaptive response to navigating adverse environments. Crucially, the lines between these two sets of processes may blur, especially across time—as an adaptation that is beneficial in the medium-term could set in motion long-term harms.

Conclusions

Stochasticity is an underappreciated contributor to developmental outcomes that may be particularly relevant in adverse environments. We present a modeling approach to parse its contributions to the emergence of brain network organization. We believe that this line of investigation could yield critical insights into interindividual variability in general and the influence of the early environment in particular.

Methods

Probabilistic Wiring Equation.

To simulate the formation of brain network connectivity, we employed generative network modeling (23, 24). This model is composed of a wiring probability equation:
p_{i,j} ∝ (d_{i,j})^η (k_{i,j})^γ.
[2]
The d_{i,j} term represents the cost of a connection between two nodes i and j, approximated using the Euclidean distance between the two nodes. The k_{i,j} term represents how nodes i and j value each other and is set a priori using a topological relationship between the two (also denoted a "wiring rule"). Two wiring parameters, η and γ, respectively parameterize the cost and value terms, thereby calibrating their relative influence. p_{i,j} reflects the probability of forming a fixed binary connection between nodes i and j and is proportional to the parameterized product of costs and values.
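The analyses in this paper were run in MATLAB; purely as an illustration, the wiring equation can be sketched in Python. The function names, the `D`/`K` input matrices, and the small `eps` guard against zero-valued matching terms are our own illustrative choices, not the paper's code.

```python
import random

def wiring_probabilities(D, K, eta, gamma, adj, eps=1e-6):
    """Relative probability of each unformed edge, following Eq. 2:
    p_ij proportional to (d_ij)^eta * (k_ij)^gamma."""
    n = len(D)
    raw = {}
    for i in range(n):
        for j in range(i + 1, n):
            if not adj[i][j]:
                # eps guards against 0 ** gamma zeroing out candidate edges
                raw[(i, j)] = (D[i][j] ** eta) * ((K[i][j] + eps) ** gamma)
    total = sum(raw.values())
    return {edge: p / total for edge, p in raw.items()}

def sample_edge(probs, rng=random):
    """Stochastically draw the next connection to form."""
    edges, weights = zip(*probs.items())
    return rng.choices(edges, weights=weights, k=1)[0]
```

Because each new edge is drawn from a probability distribution rather than chosen deterministically, repeated runs from the same seed network diverge: this is the intrinsic stochasticity the paper studies.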
Previous research has shown that generative models implementing a value term (i.e., wiring rule) that prefers connections between regions with overlapping neighbors, termed homophily, can reliably produce networks with statistics that mirror empirical observations (25, 26, 33, 34, 51). As such, we utilized the matching wiring rule in our analyses—the main homophily wiring rule used in prior work.
Mathematically, matching is defined as the normalized overlap in connectivity between two nodes (24, 25). Specifically, if Γ_i represents the set of node i's neighbors, then the matching index is equal to:
m_{i,j} = |Γ_{i/j} ∩ Γ_{j/i}| / |Γ_{i/j} ∪ Γ_{j/i}|,
[3]
where Γ_{i/j} is Γ_i with j excluded from the set. If i and j have perfect overlap in their neighborhoods, then m_{i,j} = 1. If the neighborhoods contain no common elements, then m_{i,j} = 0. This is because the numerator, |Γ_{i/j} ∩ Γ_{j/i}|, is the intersection of the neighborhoods (i.e., neighbors in common), while the denominator, |Γ_{i/j} ∪ Γ_{j/i}|, is their union (i.e., neighbors in total).
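A minimal Python rendering of the matching index of Eq. 3, using sets for the neighborhoods; the function name is our own (the paper used MATLAB and the Brain Connectivity Toolbox):

```python
def matching_index(adj, i, j):
    """Normalized neighborhood overlap between nodes i and j (Eq. 3),
    with j excluded from i's neighborhood and vice versa."""
    nbrs_i = {k for k, v in enumerate(adj[i]) if v} - {j}
    nbrs_j = {k for k, v in enumerate(adj[j]) if v} - {i}
    union = nbrs_i | nbrs_j
    # Convention assumed here: two isolated nodes get zero overlap
    return len(nbrs_i & nbrs_j) / len(union) if union else 0.0
```

Under the matching rule, this quantity supplies the value term k_{i,j} of Eq. 2, so pairs of nodes with shared neighbors become progressively more likely to connect.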
All simulations were generated within a physical space defined by the commonly used AAL116 parcellation scheme (50). The coordinates of node centroids within the AAL116 atlas were used to determine the Euclidean distance between every node combination, which was used to approximate the cost of connections.
A termination criterion must be defined for when the formation of the networks ends. As we do not mirror empirical data for the main part of the study, we set 400 connections as the stop criterion to attain a final density of 3%. This was done so that the core trade-offs could easily be examined within computational limits. Later in the study, when we compare directly to empirical data, we use the number of edges in the empirical CALM dataset networks (Methods: Empirical Prediction and Application) as the stop condition. This gives an average sample density of 10% (mean 668.5 edges; SD 43.6 edges), which aligns with prior work done on the sample (26) and elsewhere (25).
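The relationship between the edge-count stop criterion and network density is simple arithmetic; the sketch below assumes the standard undirected simple-graph definition of density:

```python
def edges_for_density(n_nodes, density):
    """Edge count implied by a target density, assuming an undirected
    simple graph where density = 2E / (N * (N - 1))."""
    return round(density * n_nodes * (n_nodes - 1) / 2)

def graph_density(n_nodes, n_edges):
    """Inverse of the above: the density achieved by n_edges."""
    return 2 * n_edges / (n_nodes * (n_nodes - 1))
```

For the 116-node parcellation used here, a 10% density corresponds to 667 edges under this convention, in line with the empirical mean of 668.5 edges reported above.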

Neonatal Seed Network Generation.

The approach to initializing generative network models (i.e., the seed) that best mimics developmental trajectories is unknown. Three main approaches have been taken in the literature: 1) the selection of edges that are highly consistent across the sample of empirical connectomes (25, 26, 34); 2) no edges at all, thereby initializing the model with a first calculated p_{i,j} matrix that is equivalent to the d_{i,j} matrix (24, 33, 35, 52); and 3) a theory-driven set of edges [e.g., medio-posterior nodes (23)].
The choice of initial conditions is of great biological relevance. The generative model is thought to reflect activity-dependent interactions between neural assemblies by forming connections between self-similar regions. However, it is widely known that the preliminary scaffold of brain connectivity arises through processes that are largely activity-independent (53). Furthermore, by virtue of wiring early in development, these regions have greater time available for future wiring and are therefore more likely to become hubs later in development [the old-get-richer effect (54)]. We sought to account for the early activity-independent scaffold using a neonatal seed network.
Toward this end, we reconstructed a core rich-club network from data collected from the dHCP. We then used this core neonatal rich club as the initial connectivity matrix for our subsequent simulations. The dHCP sample contains connectomes from n = 630 neonates (mean postconceptual age = 39.46 wk, SD = 3.58 wk, n = 297 female, n = 343 male), rendered within the AAL90 parcellation [(55); Methods: Neuroimaging data and preprocessing]. A rich-club topology describes how high-degree nodes tend to be more densely interconnected (in topological binary networks) than would be expected by chance.
To identify highly conserved invariable edges across the n = 685 neonatal connectomes, we considered only those edges which were shared in 70% of the sample (n = 480) (SI Appendix, Fig. S1 A and B). Across the sample, these edges had above-average streamline connectivity (SI Appendix, Fig. S1C).
To assess the interconnectivity between hub regions within a binary brain connectivity network, we used the topological rich-club coefficient φ(k). This quantifies the density of the subgraph comprising nodes with a degree higher than the hub-defining threshold k:
φ(k) = 2E_{>k} / (N_{>k}(N_{>k} − 1)),
[4]
where N_{>k} is the number of nodes with degree >k, and E_{>k} is the number of edges between nodes with degree >k.
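Eq. 4 can be computed directly from a binary adjacency matrix; `rich_club_coefficient` below is an illustrative pure-Python stand-in for standard toolbox implementations:

```python
def rich_club_coefficient(adj, k):
    """phi(k): density of the subgraph of nodes with degree > k (Eq. 4)."""
    n = len(adj)
    degree = [sum(row) for row in adj]
    rich = [i for i in range(n) if degree[i] > k]
    if len(rich) < 2:
        return float('nan')  # coefficient undefined for < 2 rich nodes
    # E_{>k}: edges among rich nodes, each unordered pair counted once
    e_rich = sum(adj[i][j] for a, i in enumerate(rich) for j in rich[a + 1:])
    return 2 * e_rich / (len(rich) * (len(rich) - 1))
```

A value of 1 means the nodes above the degree threshold form a complete subgraph, as in the 5-node neonatal rich club described below.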
Because higher-degree nodes make more connections, it is important to determine whether this value is higher than that expected by chance. We therefore compared the rich-club coefficient of the empirical neonatal consensus network to the mean value across 1,000 randomized null networks, generated by rewiring the edges of the empirical network while retaining the same degree sequence, using the randmio_und function from the Brain Connectivity Toolbox (56), rewiring each edge 50 times per null network. This approach has precedent in the literature (e.g., ref. 52). We thus computed a normalized rich-club coefficient, taking the ratio between the rich-club coefficient in the empirical network and the mean rich-club coefficient in this set of corresponding randomized networks:
φ_norm(k) = φ(k) / ⟨φ_rand(k)⟩.
[5]
Values of >1 indicate rich-club organization, where high-degree nodes are more densely interconnected. To characterize the statistical significance of the result, we computed a P-value directly from the empirical null distribution of the 1,000 randomized networks, φ_rand(k), as a one-sided permutation test. Specifically, the P-value is the ranking position of the empirical rich-club coefficient within the null distribution of rich-club coefficients from the randomization procedure. For example, a Pperm of 0.01 would be equivalent to being in the top 1% of 1,000 null models, which is position 10.
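The normalization (Eq. 5) and permutation P-value can be sketched as follows. `rewired_null` is a minimal double-edge-swap stand-in for the toolbox's randmio_und, and the +1 terms in `permutation_pvalue` are a common small-sample correction rather than the paper's raw rank; both are illustrative assumptions.

```python
import random

def rewired_null(edges, n_swaps, rng):
    """Degree-preserving randomization via double-edge swaps."""
    edge_set = {tuple(sorted(e)) for e in edges}
    for _ in range(n_swaps):
        (a, b), (c, d) = rng.sample(sorted(edge_set), 2)
        if len({a, b, c, d}) < 4:
            continue  # swap would create a self-loop
        new1, new2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
        if new1 in edge_set or new2 in edge_set:
            continue  # swap would create a duplicate edge
        edge_set -= {(a, b), (c, d)}
        edge_set |= {new1, new2}
    return sorted(edge_set)

def degree_sequence(edges):
    degs = {}
    for a, b in edges:
        degs[a] = degs.get(a, 0) + 1
        degs[b] = degs.get(b, 0) + 1
    return sorted(degs.values())

def normalized_rich_club(phi_emp, phi_nulls):
    """Eq. 5: empirical coefficient over the mean null coefficient."""
    return phi_emp / (sum(phi_nulls) / len(phi_nulls))

def permutation_pvalue(empirical, null_values):
    """One-sided rank of the empirical value in the null distribution."""
    rank = sum(v >= empirical for v in null_values)
    return (rank + 1) / (len(null_values) + 1)
```

The key property of the null model is that every randomized network has exactly the same degree sequence as the empirical one, so any excess hub-to-hub connectivity is attributable to topology rather than degree alone.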
In SI Appendix, Fig. S2A, we plot the rich-club coefficient, normalized rich-club coefficient, and Pperm values across increasing degree (k) levels. This highlights that a statistically significant rich-club topology (P = 0.018) is found beyond k = 22. That is, there is a subgraph of nodes with degree >22 in the neonatal consensus network that are connected to each other more than would be expected by chance. This rich-club network contains five nodes: left and right lingual cortex, left precuneus, left middle occipital cortex, and right fusiform cortex. All nodes were fully connected to each other (n = 10 edges) (SI Appendix, Fig. S2 B, Left). This seed network generally occupies medio-posterior positions, which are thought to be the earliest locations of white and grey matter development (23).
To scale these results to adult size, we then placed this rich club within the equivalent adult AAL116 atlas (SI Appendix, Fig. S2 B, Right). This network was then used as the seed for all simulations. Note that, to prevent data leakage, we did not assess model fits with respect to any of the dHCP data. Instead, we only tested models on a completely independent dataset (see below).
As the seed network consists of 10 edges (density = 0.075%), and simulations were computed for 400 edges, the seed constitutes the first 2.5% and 1.05% of simulated networks in the main analyses vs. empirical analyses, respectively. As seen in SI Appendix, Fig. S2A, a lower threshold of degree >17 would also generate a significant rich-club network, but this constituted 133 edges and was therefore deemed too large.

Parameter Space and Repeated Simulations.

We ran the generative models from this seed network at 625 different parameter combinations of η and γ. The parameter combinations were selected evenly across a parameter space of −4 ≤ η ≤ 0 and 0 ≤ γ ≤ 1 to provide a 25 × 25 grid space. This parameter space was chosen because this range best recapitulates the statistics of empirical brain networks using the matching rule (25, 26). At each combination, we ran the simulation 625 times. This led to a total of 390,625 (625 parameter combinations × 625 repetitions) simulations for each protocol.

Experiment 1—Network Outcome Dissimilarity.

For each parameter combination, we computed two measures of dissimilarity between every pair of the 625 produced networks (Fig. 1B). The first was a measure of topological dissimilarity. For each network, we calculated five simple global topological measures using the Brain Connectivity Toolbox (56): 1) global clustering; 2) mean betweenness centrality; 3) total edge length; 4) global efficiency; and 5) modularity. These were used because they cover a range of topological features common to brain networks. To ensure that no single measure skewed this topological measurement matrix, we normalized the matrix using the normalize() function in MATLAB 2020b, which scaled all values between 0 and 1 (using the "range" setting). From this normalized topological matrix, we then calculated the Euclidean distance between each pair of networks in this 5-dimensional space (625 × 625 dissimilarity matrix). As a measure of total topological dissimilarity, we computed the summation of this Euclidean distance matrix (Fig. 1 B, Top Right). The second was a measure of embedding dissimilarity, which calculated the percentage of nonoverlapping connections (i.e., the inconsistency) between all network combinations (Fig. 1 B, Bottom Right). For both measures, larger numbers correspond to more heterogeneity among network outcomes, and vice versa.
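The two dissimilarity measures can be sketched in pure Python; min-max scaling stands in for MATLAB's normalize(...,'range'), and the function names are our own:

```python
import math

def normalize_columns(X):
    """Min-max scale each column of a samples x measures matrix to [0, 1]."""
    cols = list(zip(*X))
    scaled = []
    for col in cols:
        lo, hi = min(col), max(col)
        scaled.append([(v - lo) / (hi - lo) if hi > lo else 0.0 for v in col])
    return [list(row) for row in zip(*scaled)]

def total_topological_dissimilarity(X):
    """Sum of pairwise Euclidean distances in the normalized measure space."""
    Z = normalize_columns(X)
    total = 0.0
    for a in range(len(Z)):
        for b in range(a + 1, len(Z)):
            total += math.dist(Z[a], Z[b])
    return total

def embedding_dissimilarity(edges_a, edges_b):
    """Percentage of connections not shared by two binary networks."""
    A, B = set(edges_a), set(edges_b)
    return 100.0 * len(A ^ B) / len(A | B)
```

The topological measure asks whether two networks have similar global statistics; the embedding measure asks whether they placed the same individual connections. Two networks can be close on the first and far on the second.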

Experiment 2—Timing of Noise Analysis.

To assess the effects of heightened stochasticity in network development on variability in outcomes, we reran each of our models while forming a subset of the connections at random. This was achieved by temporarily suspending the wiring equation, erasing any contribution made by the cost and value of connections, and allowing network formation to proceed completely randomly. We injected this noise at one of three stages of development: early, middle, and late, corresponding to 5 to 10%, 47.5 to 52.5%, and 90 to 95% of network development. We then computed the global topological dissimilarity (Methods: Experiment 1—Network Outcome Dissimilarity) of the resulting networks. This enabled us to identify and quantify the contribution of injecting noise at the three points in development to the stochasticity of network outcomes.
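The noise-injection logic amounts to switching the edge-selection rule inside a developmental window; the sketch below assumes the published windows, and `choose_edge` with its uniform-sampling fallback is our own illustration:

```python
import random

def in_noise_window(step, total_steps, window):
    """True when the current wiring step falls inside the noise window,
    e.g. window=(0.05, 0.10) for 'early' development."""
    frac = step / total_steps
    return window[0] <= frac < window[1]

def choose_edge(step, total_steps, window, candidates, probs, rng):
    """Inside the window, ignore the model and pick uniformly at random;
    otherwise sample according to the wiring-equation probabilities."""
    if in_noise_window(step, total_steps, window):
        return rng.choice(candidates)
    return rng.choices(candidates, weights=probs, k=1)[0]
```

Because only the selection rule changes, the three conditions (early, middle, late) are matched on everything except when the model's constraints are suspended.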

Experiment 3—Robustness Analysis.

Network robustness refers to the ability for networks to be resistant to external perturbation. It is evaluated by computing some benchmark measure of network quality before and after perturbation. As previous studies suggest that communication models accurately capture propagation dynamics in empirical brain networks (57, 58), we selected binary network communicability (59) as the benchmark measure:
C = e^A.
[6]
Here, A is the simulated symmetric binary adjacency matrix and C is the communicability matrix, computed as the matrix exponential of A.
We used two regimes to assess the robustness of the networks: i) targeted and ii) random attacks. In the targeted attack regime, nodes within a network are first ranked according to some measure (in our case, node degree). Then, nodes are incrementally removed (i.e., attacked) by setting all of their connection weights to zero. In the random attack regime, nodes chosen at random are attacked in the same way. After each node's connectivity is removed, network communicability is recalculated.
Robustness—or resistance to change—was subsequently measured as the coefficient of a univariate linear model fit to the trajectory of the natural logarithm of communicability over the removal of the first 25% of nodes (29 nodes). This indicates the extent to which a network is robust to change in its capacity to support dynamics; the larger the magnitude of the negative coefficient, the less robust the network, and vice versa.
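Communicability (Eq. 6) and the robustness slope can be sketched in pure Python. A truncated Taylor series stands in for a proper matrix exponential, and the hypothetical `attack_slope` implements only the targeted (highest-degree-first) regime; both simplifications are ours.

```python
import math

def matexp(A, terms=30):
    """Truncated Taylor series for the matrix exponential C = e^A (Eq. 6)."""
    n = len(A)
    C = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    P = [row[:] for row in C]
    for k in range(1, terms):
        # P accumulates A^k / k! one factor at a time
        P = [[sum(P[i][m] * A[m][j] for m in range(n)) / k for j in range(n)]
             for i in range(n)]
        C = [[C[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return C

def total_communicability(A):
    return sum(sum(row) for row in matexp(A))

def attack_slope(A, frac=0.25):
    """Slope of ln(communicability) as the top-degree nodes are zeroed out."""
    A = [row[:] for row in A]
    n = len(A)
    n_remove = max(1, round(frac * n))
    ys = [math.log(total_communicability(A))]
    for _ in range(n_remove):
        deg = [sum(row) for row in A]
        target = deg.index(max(deg))
        for j in range(n):
            A[target][j] = A[j][target] = 0.0
        ys.append(math.log(total_communicability(A)))
    xs = list(range(len(ys)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A steeper (more negative) slope means communicability collapses faster under attack, i.e., a less robust network.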

Experiment 4—Equifinality Analysis.

To test which factors may contribute to equifinality in topology, we assessed the ability of a supervised machine learning model to distinguish simulations run with differing γ. For each of the previously run 625 simulations, we trained a support vector machine (SVM) to distinguish its global statistics from those of each of the 624 other simulations. Then, we used 10-fold cross-validation to determine the misclassification rate (i.e., how many of the 625 runs were attributed to the wrong parameter combination). To determine the contribution of γ to equifinality, we calculated the Pearson correlation between the misclassification rate and the Euclidean distance between the two γ values. To determine the contribution of intrinsic stochasticity to equifinality, we calculated the Pearson correlation between the mean misclassification rate (across the 624 comparisons) and the topological stochasticity of the simulation (Methods: Experiment 1—Network Outcome Dissimilarity).
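The paper trained an SVM; purely to illustrate the cross-validated misclassification logic, here is a minimal nearest-centroid stand-in (explicitly not the classifier actually used):

```python
import math
import random

def kfold_misclassification(X, y, k=10, rng=None):
    """Cross-validated misclassification rate of a nearest-centroid
    classifier: a minimal, illustrative stand-in for the paper's SVM."""
    rng = rng or random.Random(0)
    idx = list(range(len(X)))
    rng.shuffle(idx)
    folds = [idx[f::k] for f in range(k)]
    errors = 0
    for f in range(k):
        test = set(folds[f])
        train = [i for i in idx if i not in test]
        # One centroid per class, computed on the training portion only
        centroids = {}
        for label in set(y[i] for i in train):
            pts = [X[i] for i in train if y[i] == label]
            centroids[label] = [sum(c) / len(pts) for c in zip(*pts)]
        for i in test:
            pred = min(centroids, key=lambda l: math.dist(X[i], centroids[l]))
            errors += pred != y[i]
    return errors / len(X)
```

A high misclassification rate between two parameter combinations means their simulated networks occupy overlapping regions of the topological measure space, which is exactly the operational definition of equifinality used here.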

Empirical Prediction and Application.

To test our theoretical framework, we made empirical predictions about the relationship between SES and brain wiring parameters, which we then tested in a large sample of children.

The CALM cohort.

The data were collected at the CALM, a research clinic at the MRC Cognition and Brain Sciences Unit, University of Cambridge. The study protocol was approved by, and data collection proceeded under the permission of, the local NHS Research Ethics Committee (reference: 13/EE/0157). The sample is designed to be reflective of children at heightened risk of a range of neurodevelopmental difficulties (60) and has sufficient variability within it to establish wiring parameter differences (26). Practitioners working in specialist educational or clinical services in the East of England (UK) were asked to refer children with ongoing problems of "language", "attention", "memory", or "learning/poor school progress", regardless of the presence or absence of a formal diagnosis. Exclusion criteria included uncorrected problems in vision or hearing, having English as a second language, or having received a causative genetic diagnosis. In addition to these 800 children, 200 children who presented no such difficulties were recruited from the same schools and neighborhoods. A range of measures were collected, including genetic, cognitive and behavioral, and neural data (see ref. 60 for the full assessment protocol). Of the n = 1,000 total children, n = 425 completed some neuroimaging portion of the study, of whom n = 386 had diffusion imaging data (see below for preprocessing details). A strict movement threshold (see below) led to a final sample of n = 357. Of these children, n = 283 were from the pool of 800 children meeting the above CALM criteria, and n = 73 were from the control group.

Neuroimaging data and preprocessing.

MRI data were acquired on a Siemens 3 T Prisma-fit system (Siemens Healthcare) using a 32‐channel quadrature head coil. T1‐weighted volume scans were acquired using a whole brain coverage 3D Magnetization Prepared Rapid Acquisition Gradient Echo sequence acquired using 1 mm isometric image resolution. Echo time was 2.98 ms, and repetition time was 2,250 ms. Diffusion scans were acquired using echo‐planar diffusion‐weighted images with an isotropic set of 68 noncollinear directions, using a weighting factor of b = 1,000 s mm−2, interleaved with 4 T2‐weighted (b = 0) volumes. Whole brain coverage was obtained with 60 contiguous axial slices and an isometric image resolution of 2 mm. Echo time was 90 ms and repetition time was 8,500 ms. We enforced a strict movement threshold of 1 mm (estimated through FSL eddy during the diffusion sequence), which led to 29 scans being removed, leaving a final sample of 357 children.
Images were preprocessed using FSL eddy to correct for motion, eddy currents, and field inhomogeneities. Nonlocal means denoising using DiPy v0.11 was performed to increase signal‐to‐noise ratio. Finally, single-shell constrained spherical deconvolution was used to estimate the fiber orientation distribution (61) with a brain mask derived from the T1-weighted image. Whole-brain tractography was then performed using iFOD2 probabilistic tracking (62), to generate 10⁷ streamlines with a maximum length of 250 mm and a minimum length of 30 mm. Weights for each streamline were calculated using SIFT2 (63). Streamlines were then mapped to the AAL116 atlas, a cortical parcellation generated using automatic anatomical labeling (50), to generate a connectivity matrix.
We thresholded this connectivity matrix at an absolute threshold of 1,530 streamlines to generate binary connectomes for our analyses, achieving a sample-average density of 10% (see Methods: Probabilistic Wiring Equation for details on the number of connections present across the sample). We then simulated the development of each structural connectome using the generative network modeling procedure outlined above and determined the parameters that best replicated its organization through the Fast Landscape Generation (FLaG) approach described below.
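The thresholding step amounts to a simple binarization of the weighted streamline matrix; whether the cutoff is inclusive is our assumption, as the paper does not specify:

```python
def binarize(weights, threshold):
    """Absolute streamline-count threshold -> binary connectome.
    The inclusive (>=) cutoff is an assumption made for illustration."""
    n = len(weights)
    return [[1 if i != j and weights[i][j] >= threshold else 0
             for j in range(n)] for i in range(n)]

def density(adj):
    """Undirected density of a binary adjacency matrix."""
    n = len(adj)
    e = sum(adj[i][j] for i in range(n) for j in range(i + 1, n))
    return 2 * e / (n * (n - 1))
```

In practice, the single absolute threshold of 1,530 streamlines is what yields the 10% sample-average density reported above.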

Fitting generative model parameters using FLaG.

A significant shortcoming of generative modeling in large cohort studies is that parameter estimation is computationally burdensome. Here, we use a fast, reliable, and accurate parameter estimation method for connectome generative models called FLaG (38). This method computes multiple landscapes and fits subjects to individual simulations simultaneously. At our sample size of n = 357, this allows for efficient yet clear examination of differences. As recommended, we generated k = 50 landscapes and took the average parameter combination across landscapes for each subject.

Analysis of SES.

To measure SES, we used the IMD. This index captures the relative deprivation of an individual’s circumstances by surveying income, employment, education, health, crime, barriers to housing and services, and the quality of the living environment. See SI Appendix, Fig. S3 for the distribution of IMD values in the CALM sample.

Statistical testing.

To examine the relationship between generative model outcomes (model fits and wiring parameters) and SES, we split subjects into high and low SES groups using the sample median IMD. To determine whether there was evidence of population-mean differences between the two groups, we conducted a two-sample t test at a significance threshold of P < 0.05. To determine the effect size, we calculated Cohen’s d. To verify the significance of our statistical findings, we also undertook a permutation test by randomly assigning the group allocation of each subject 1,000 times and computing the absolute mean difference between randomized groups on each iteration to generate a null distribution. We then compared the empirical absolute mean difference to the null distribution, generating a permuted P-value by observing where the empirical value ranked in the null distribution. This generated P-values comparable to those obtained with the original two-sample t tests: η (Pperm = 0.001), γ (Pperm = 0.566), model fits (Pperm = 0.024).
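Cohen's d and the permutation procedure can be sketched as follows; this is an illustrative pure-Python version (the paper additionally ran two-sample t tests, which are omitted here):

```python
import random

def cohens_d(a, b):
    """Effect size using the pooled-SD formulation."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    pooled = ((len(a) - 1) * va + (len(b) - 1) * vb) / (len(a) + len(b) - 2)
    return (ma - mb) / pooled ** 0.5

def permutation_test(a, b, n_perm=1000, rng=None):
    """P-value for the absolute mean difference under random relabeling
    of group membership, mirroring the procedure described above."""
    rng = rng or random.Random(0)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        count += abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed
    return count / n_perm
```

The permutation test makes no distributional assumptions, which is why it serves as a useful check on the parametric t test results.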

Data, Materials, and Software Availability

Results were generated using code written in MATLAB 2020b, with simulations conducted on compute clusters for parallelization. Code is available at https://github.com/DanAkarca/AdaptiveStochasticity (64). Requests can be made to access the empirical dataset supporting this study (the Centre for Attention Learning and Memory) from a data download portal found at https://portal.ide-cam.org.uk/overview/1158 (65). Simulated data are available at https://osf.io/hsc87/ (66).

Acknowledgments

This publication is based on research supported by the Templeton World Charity Foundation, Inc. (funder DOI 501100011730) under grant TWCF-2022-30510. D. Akarca and D. Astle are supported by the John S. McDonnell Foundation Opportunity Award and the Medical Research Council, under grant MC-A0606-5PQ41. Sofia Carozza is supported by the Cambridge Trust. All research at the Department of Psychiatry in the University of Cambridge is supported by the National Institute for Health and Care Research Cambridge Biomedical Research Centre (NIHR203312) and the NIHR Applied Research Collaboration East of England. The views expressed are those of the authors and not necessarily those of the NIHR or the Department of Health and Social Care. We would like to thank Alexa Mousley for providing seed networks derived from the dHCP.

Author contributions

S.C., D. Akarca, and D. Astle designed research; S.C. and D. Akarca performed research; S.C. and D. Akarca contributed new reagents/analytic tools; S.C. and D. Akarca analyzed data; and S.C., D. Akarca, and D. Astle wrote the paper.

Competing interests

The authors declare no competing interest.

Supporting Information

Appendix 01 (PDF)

References

1
E. Armstrong-Carter, J. Wertz, B. W. Domingue, Genetics and child development: Recent advances and their implications for developmental research. Child Dev. Perspect. 15, 57–64 (2021).
2
A. G. Jansen, S. E. Mous, T. White, D. Posthuma, T. J. C. Polderman, What twin studies tell us about the heritability of brain development, morphology, and function: A review. Neuropsychol. Rev. 25, 27–46 (2015).
3
G. Vogt, Stochastic developmental variation, an epigenetic source of phenotypic diversity with far-reaching biological consequences. J. Biosci. 40, 159–204 (2015).
4
P. G. H. Clarke, The limits of brain determinacy. Proc. R. Soc. B 279, 1665 (2012).
5
T. Heams, Randomness in biology. Math. Struct. Comput. Sci. 24, e240308 (2014).
6
K. Honegger, B. de Bivort, Stochasticity, individuality and behavior. Curr. Biol. 28, R8–R12 (2018).
7
A. Raj, A. van Oudenaarden, Nature, nurture, or chance: Stochastic gene expression and its consequences. Cell 135, 216–226 (2008).
8
S. Maskery, T. Shinbrot, Deterministic and stochastic elements of axonal guidance. Annu. Rev. Biomed. Eng. 7, 187–221 (2005).
9
R. Borisyuk, T. Cooke, A. Roberts, Stochasticity and functionality of neural systems: Mathematical modelling of axon growth in the spinal cord of tadpole. BioSystems 93, 101–114 (2008).
10
A. A. Faisal, L. P. J. Selen, D. M. Wolpert, Noise in the nervous system. Nat. Rev. Neurosci. 9, 292–303 (2008).
11
D. A. Rusakov, L. P. Savtchenko, P. E. Latham, Noisy synaptic conductance: Bug or a feature? Trends Neurosci. 43, 363–372 (2020).
12
B. Zhao et al., Common genetic variation influencing human white matter microstructure. Science 372, eabf3736 (2021).
13
S. J. Lee et al., Quantitative tract-based white matter heritability in twin neonates. Neuroimage 111, 123–135 (2015).
14
T. E. Faust, G. Gunner, D. P. Schafer, Mechanisms governing activity-dependent synaptic pruning in the developing mammalian CNS. Nat. Rev. Neurosci. 22, 657–673 (2021).
15
R. G. Almeida, D. A. Lyons, On myelinated axon plasticity and neuronal circuit formation and function. J. Neurosci. 37, 10023–10034 (2017).
16
B. J. Ellis, A. J. Figueredo, B. H. Brumbach, G. L. Schlomer, Fundamental dimensions of environmental risk: The impact of harsh versus unpredictable environments on the evolution and development of life history strategies. Hum. Nat. 20, 204–268 (2009).
17
E. P. Davis et al., Exposure to unpredictable maternal sensory signals influences cognitive development across species. Proc. Natl. Acad. Sci. U.S.A. 114, 10390–10395 (2017).
18
J. Molet et al., Fragmentation and high entropy of neonatal experience predict adolescent emotional outcome. Transl. Psychiatry 6, e702 (2016).
19
E. S. Young, V. Griskevicius, J. A. Simpson, T. E. A. Waters, Can an unpredictable childhood environment enhance working memory? Testing the sensitized-specialization hypothesis. J. Pers. Soc. Psychol. 114, 891–908 (2018).
20
L. T. Ross, C. O. Hood, S. D. Short, Unpredictability and symptoms of depression and anxiety. J. Soc. Clin. Psychol. 35, 371–385 (2016).
21
J. R. Doom, A. A. Vanzomeren-Dohm, J. A. Simpson, “Early unpredictability predicts increased adolescent externalizing behaviors and substance use: A life history perspective” in Development and Psychopathology, D. Cicchetti, Ed. (Cambridge University Press, 2016), vol. 28, pp. 1505–1516.
22
E. S. Young, W. E. Frankenhuis, B. J. Ellis, Theory and measurement of environmental unpredictability. Evol. Hum. Behav. 41, 550–556 (2020).
23
M. Kaiser, C. C. Hilgetag, Modelling the development of cortical systems networks. Neurocomputing 58–60, 297–302 (2004).
24
P. E. Vértes et al., Simple models of human brain functional networks. Proc. Natl. Acad. Sci. U.S.A. 109, 5868–5873 (2012).
25
R. F. Betzel et al., Generative models of the human connectome. Neuroimage 124, 1054–1064 (2016).
26
D. Akarca et al., A generative network model of neurodevelopmental diversity in structural brain organization. Nat. Commun. 12, 1–18 (2021).
27
T. Zhao et al., Age-related changes in the topological organization of the white matter structural connectome across the human lifespan. Hum. Brain Mapp. 36, 3777–3792 (2015).
28
M. G. Puxeddu et al., The modular organization of brain cortical connectivity across the human lifespan. Neuroimage 218, 116974 (2020).
29
L. Riedel, M. P. van den Heuvel, S. Markett, Trajectory of rich club properties in structural brain networks. Hum. Brain Mapp. 43, 4239–4253 (2022).
30
D. Scheinost et al., Prenatal stress alters amygdala functional connectivity in preterm neonates. Neuroimage (Amst) 12, 381 (2016).
31
U. A. Tooley, D. S. Bassett, A. P. Mackey, Environmental influences on the pace of brain development. Nat. Rev. Neurosci. 22, 372–384 (2021).
32
P. E. Vértes, E. T. Bullmore, Annual research review: Growth connectomics–the organization and reorganization of brain networks during normal and abnormal development. J. Child Psychol. Psychiatry. 56, 299–320 (2015).
33
D. Akarca et al., Homophilic wiring principles underpin neuronal network topology in vitro. bioRxiv [Preprint] (2022). https://doi.org/10.1101/2022.03.09.483605 (Accessed 10 April 2023).
34
S. Carozza et al., Early adversity changes the economic conditions of mouse structural brain network organization. Dev. Psychobiol. 65, e22405 (2023).
35
S. Oldham et al., Modeling spatial, developmental, physiological, and topological constraints on human brain connectivity. Sci. Adv. 8, eabm6127 (2022).
36
X. Zhang et al., Generative network models of altered structural brain connectivity in schizophrenia. Neuroimage 225, 117510 (2021).
37
A. Arnatkeviciute et al., Genetic influences on hub connectivity of the human connectome. Nat. Commun. 12, 1–14 (2021).
38
Y. Liu et al., Parameter estimation for connectome generative models: Accuracy, reliability, and a fast parameter fitting method. Neuroimage 270, 119962 (2023).
39
D. Cicchetti, F. A. Rogosch, Equifinality and multifinality in developmental psychopathology. Dev. Psychopathol. 8, 597–600 (1996).
40
P. B. Badcock, Evolutionary systems theory: A unifying meta-theory of psychological science. Rev. Gen. Psychol. 16, 10–23 (2012).
41
P. R. Hiesinger, B. A. Hassan, The evolution of variability and robustness in neural development. Trends Neurosci. 41, 577–586 (2018).
42
D. Akarca et al., A weighted generative model of the human connectome. bioRxiv [Preprint] (2023). https://doi.org/10.1101/2023.06.23.546237 (Accessed 10 April 2023).
43
D. V. M. Bishop, Cognitive neuropsychology and developmental disorders: Uncomfortable bedfellows. Q. J. Exp. Psychol. A. 50, 899–923 (1997).
44
D. Scheinost et al., Does prenatal stress alter the developing connectome? Pediatr. Res. 81, 214–226 (2017).
45
E. Fitzgerald, K. Hor, A. J. Drake, Maternal influences on fetal brain development: The role of nutrition, infection and stress, and the potential for intergenerational consequences. Early Hum. Dev. 150, 105190 (2020).
46
A. DiMartino et al., Unraveling the miswired connectome: A developmental perspective. Neuron 83, 1335–1353 (2014).
47
W. E. Frankenhuis, D. Nettle, The strengths of people in poverty. Curr. Dir. Psychol. Sci. 29, 16–21 (2020).
48
D. K. Jones, Challenges and limitations of quantifying brain connectivity in vivo with diffusion MRI. Imaging Med. 2, 341–355 (2010).
49
A. Zalesky et al., Connectome sensitivity or specificity: Which is more important? Neuroimage 142, 407–420 (2016).
50
N. Tzourio-Mazoyer et al., Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 15, 273–289 (2002).
51
S. Oldham, A. Fornito, The development of brain network hubs. Dev. Cogn. Neurosci. 36, 100607 (2019).
52
A. Arnatkeviciute, B. D. Fulcher, M. A. Bellgrove, A. Fornito, Where the genome meets the connectome: Understanding how genes shape human brain connectivity. NeuroImage 244, 118570 (2021).
53
A. L. Kolodkin, M. Tessier-Lavigne, Mechanisms and molecules of neuronal wiring: A primer. Cold Spring Harb. Perspect. Biol. 3, 1–14 (2011).
54
M. Kaiser, Mechanisms of connectome development. Trends Cogn. Sci. 21, 703–717 (2017).
55
G. Ball et al., Rich-club organization of the newborn human brain. Proc. Natl. Acad. Sci. U.S.A. 111, 7456–7461 (2014).
56
M. Rubinov, O. Sporns, Complex network measures of brain connectivity: Uses and interpretations. Neuroimage 52, 1059–1069 (2010).
57
A. Avena-Koenigsberger, B. Misic, O. Sporns, Communication dynamics in complex brain networks. Nat. Rev. Neurosci. 19, 17–33 (2018).
58
C. Seguin, L. S. Mansour, O. Sporns, A. Zalesky, F. Calamante, Network communication models narrow the gap between the modular organization of structural and functional brain networks. Neuroimage 257, 119323 (2022).
59
J. J. Crofts, D. J. Higham, A weighted communicability measure applied to complex brain networks. J. R. Soc. Interface 6, 411–414 (2009).
60
J. Holmes, A. Bryant, S. E. Gathercole, Protocol for a transdiagnostic study of children with problems of attention, learning and memory (CALM). BMC Pediatr. 19, 10 (2019).
61
T. Dhollander, D. Raffelt, A. Connelly, “Unsupervised 3-tissue response function estimation from single-shell or multi-shell diffusion MR data without a co-registered T1 image” in ISMRM Workshop on Breaking the Barriers of Diffusion MRI (2016), p. 5.
62
J.-D. Tournier, F. Calamante, A. Connelly, “Improved probabilistic streamlines tractography by 2nd order integration over fibre orientation distributions” in Proceedings of the International Society for Magnetic Resonance in Medicine (2010), p. 1670.
63
R. E. Smith, J. D. Tournier, F. Calamante, A. Connelly, SIFT2: Enabling dense quantitative assessment of brain white matter connectivity using streamlines tractography. Neuroimage 119, 338–351 (2015).
64
S. Carozza, D. Akarca, D. E. Astle, Code repository for “The adaptive stochasticity hypothesis: Modeling equifinality, multifinality, and adaptation to adversity”. Github. https://github.com/DanAkarca/AdaptiveStochasticity/. Deposited 6 September 2023.
65
The CALM Team, Centre for Attention Learning and Memory (CALM). Data Access Portal. CAM:IDE. https://portal.ide-cam.org.uk/overview/1158. Deposited 17 June 2023.
66
S. Carozza, D. Akarca, D. E. Astle, The adaptive stochasticity hypothesis: Modeling equifinality, multifinality, and adaptation to adversity. OSF. https://osf.io/hsc87/. Deposited 6 September 2023.

Published in

Proceedings of the National Academy of Sciences, Vol. 120, No. 42, October 17, 2023. PubMed: 37816058

Submission history

Received: May 4, 2023
Accepted: August 25, 2023
Published online: October 10, 2023
Published in issue: October 17, 2023

Keywords

  1. brain development
  2. stochasticity
  3. generative modeling
  4. early adversity
  5. structural connectome


Notes

This article is a PNAS Direct Submission. A.Z. is a guest editor invited by the Editorial Board.

Authors

Affiliations

Sofia Carozza1 [email protected]
Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, United Kingdom
Department of Neurology, Harvard Medical School, Boston, MA 02115
Department of Neurology, Brigham and Women’s Hospital, Boston, MA 02115
Danyal Akarca
Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, United Kingdom
Duncan Astle
Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, United Kingdom
Department of Psychiatry, University of Cambridge, Cambridge CB2 0SZ, United Kingdom

Notes

1
To whom correspondence may be addressed. Email: [email protected] or [email protected].
