Adult neurogenesis acts as a neural regularizer

Edited by Karl Deisseroth, Stanford University, Stanford, CA; received April 17, 2022; accepted September 11, 2022
November 2, 2022
119 (45) e2206704119


In deep neural networks, various forms of noise injection are used as regularization techniques to prevent overfitting and promote generalization on unseen test data. Here, we were interested in whether adult neurogenesis—the lifelong production of new neurons in the hippocampus—might similarly function as a regularizer in the brain. We explored this question computationally, by implementing a neurogenesis-like process within a convolutional neural network trained on a category learning task. We found that neurogenesis regularization was at least as effective as conventional regularizers (e.g., dropout) in improving model performance. These results suggest that optimal levels of neurogenesis may improve memory-guided decision making by promoting the formation of generalizable memories that can be applied in a broader range of circumstances.


New neurons are continuously generated in the subgranular zone of the dentate gyrus throughout adulthood. These new neurons gradually integrate into hippocampal circuits, forming new naive synapses. Viewed from this perspective, these new neurons may represent a significant source of “wiring” noise in hippocampal networks. In machine learning, such noise injection is commonly used as a regularization technique. Regularization techniques help prevent overfitting training data and allow models to generalize learning to new, unseen data. Using a computational modeling approach, here we ask whether a neurogenesis-like process similarly acts as a regularizer, facilitating generalization in a category learning task. In a convolutional neural network (CNN) trained on the CIFAR-10 object recognition dataset, we modeled neurogenesis as a replacement/turnover mechanism, where weights for a randomly chosen small subset of hidden layer neurons were reinitialized to new values as the model learned to categorize 10 different classes of objects. We found that neurogenesis enhanced generalization on unseen test data compared to networks with no neurogenesis. Moreover, neurogenic networks either outperformed or performed similarly to networks with conventional noise injection (i.e., dropout, weight decay, and neural noise). These results suggest that neurogenesis can enhance generalization in hippocampal learning through noise injection, expanding on the roles that neurogenesis may have in cognition.
Noise reflects random or unpredictable fluctuations that are not part of a signal (1). Within the brain there are multiple sources of noise, including processes at the cellular (e.g., protein production and degradation), electrical (e.g., membrane potential), or synaptic (e.g., vesicular release) levels that collectively impact the probability and timing of action potentials (1, 2). While neural noise is often considered an obstacle in extracting relevant information from the brain’s output activity, optimal levels of neural noise (encapsulated in a broad range of phenomena termed stochastic facilitation) (2) may enhance information transmission and behavior.
Similarly, in machine learning, noise can be used to achieve better performance. One of the most common examples is that the addition of an optimized level of noise helps a model avoid overfitting and enhances generalization (i.e., it avoids the memorization of training data that does not benefit the transfer of learning to unseen data) (3). This process of preventing overfitting in order to enhance generalization on unseen data is termed regularization. We, and others, have suggested that neural noise may be one such strategy by which the brain performs regularization to better extract the statistical regularities of our experiences (4, 5).
In the hippocampus, a potential source of neural noise is ongoing neurogenesis. Asymmetric division of neural precursor cells in the subgranular zone gives rise to newborn neurons that synaptically integrate into hippocampal networks throughout life (6–9). This process may be considered a form of noise injection in two respects. First, these new neurons are naive and therefore do not encode any aspects of past experience. Second, their integration gradually reconfigures hippocampal networks. Therefore, we hypothesize that this form of “wiring” noise may function to regularize learning in the hippocampus, and we explore these ideas computationally using a category learning task.
To study whether neurogenesis can act as a regularizer, we implemented neurogenesis in hidden layers of a traditional deep learning architecture (i.e., a convolutional neural network [CNN]). In adult rodents, loss of developmentally generated granule cells is balanced by the addition of new neurons (10, 11). Therefore, we modeled neurogenesis as a “replacement/turnover” mechanism, where a randomly chosen small subset of neurons in the middle layer is “turned over” such that their input and output weights are reinitialized to new values (whereas connections of mature neurons remain the same). This turnover/replacement model ensures that the size of the network remains constant. From a computational perspective, this controls for the fact that networks of different sizes can perform very differently (since larger networks have more tunable parameters). From a biological perspective, this mimics the rodent hippocampus where neurogenesis produces negligible net growth (i.e., increase in total number of granule cells) during adulthood (11). While a deep learning model with neurogenesis has been previously developed, it examined the impact of new neuron addition on memory interference (12) and did not explicitly evaluate the role of neurogenesis in generalization.


Neurogenesis Improves Generalization in CNNs.

We implemented neurogenesis in a CNN trained and tested on the CIFAR-10 dataset (13). CIFAR-10 consists of 60,000 32 × 32 pixel RGB (red, green, blue) images in 10 different classes representing airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. The CNN consists of three convolutional layers with 64, 64, and 128 filters, respectively (see Methods for more information), followed by three fully connected layers (Fig. 1 A and B). To model neurogenesis, the weights of a randomly selected subpopulation of neurons in the middle fully connected layer were reinitialized repeatedly during training, with multiple turnover events. Following hyperparameter tuning (SI Appendix, Fig. 1A), we found that turning over 3.2% of neurons every 640 minibatch updates had the greatest impact on performance of the network (Fig. 1C). Neurogenesis was restricted to a single layer, mimicking the occurrence of neurogenesis only in the dentate gyrus of the hippocampus. We found that neurogenesis in the second fully connected layer of the network performed best during hyperparameter tuning (SI Appendix, Fig. 1B), and we therefore used this configuration for all experiments. The performance of neural networks also depends on initialization (some random initializations outperform others even after undergoing the same training process). To control for potential differences in initialization states between groups, each network initialization was copied, and these identical versions were used in the control (no neurogenesis) and experimental (neurogenesis) networks, respectively.
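The replacement/turnover step described above can be sketched in a few lines of NumPy. This is a minimal illustration rather than the authors' implementation: the function name, the array shapes, and the uniform reinitialization bounds are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def turnover(W_in, W_out, fraction=0.032, rng=rng):
    """Reinitialize the input and output weights of a randomly chosen
    subset of hidden units (a 'turnover event'); mature units keep their
    weights. Network size is unchanged."""
    n_hidden = W_in.shape[1]                      # W_in: (n_prev, n_hidden)
    k = max(1, int(round(fraction * n_hidden)))   # e.g. 3.2% of the layer
    idx = rng.choice(n_hidden, size=k, replace=False)
    # Uniform fan-in-scaled bounds (an assumption; any standard init works).
    lim_in = 1.0 / np.sqrt(W_in.shape[0])
    lim_out = 1.0 / np.sqrt(n_hidden)
    W_in[:, idx] = rng.uniform(-lim_in, lim_in, size=(W_in.shape[0], k))
    W_out[idx, :] = rng.uniform(-lim_out, lim_out, size=(k, W_out.shape[1]))
    return idx                                    # indices of "new" neurons
```

In training, such a call would be made once every fixed number of minibatch updates (every 640 in the tuned configuration), while all other weights continue to learn by gradient descent.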
Fig. 1.
Implementing neurogenesis in convolutional neural networks. (A) Illustration of the CNN used in these experiments. (B) A schematic illustrating how replacement/turnover neurogenesis was implemented. (C) Illustration of how training occurs with neurogenesis and testing occurs without any neurogenesis events.
We found that neurogenesis improved network performance on unseen test data. Whereas test accuracy for the control network was 74.36 ± 0.16%, networks trained with neurogenesis had a test score of 76.20 ± 0.20% (Fig. 2A). This improvement in performance with neurogenesis did not depend on initialization states. In particular, the performance-enhancing effects of neurogenesis were not limited to low-scoring variants. Higher-scoring variants also benefited from neurogenesis, indicating that neurogenesis can improve generalization beyond the top scores that a static, nonneurogenic network can achieve (Fig. 2B). Control and neurogenesis networks made few miscategorization errors, but when they did occur, both networks typically made the same type of errors (e.g., confusing cats and dogs) (SI Appendix, Fig. 2).
Fig. 2.
Neurogenesis improves generalization in CNNs. (A) Box plot of test accuracy of control and neurogenesis networks after training; t test: t₁₉ = 7.00, P = 1.1 × 10⁻⁶. (B) Violin plots of the distribution of scores for the lowest-scoring (Left) and highest-scoring (Right) halves from each group (control and neurogenesis) from (A); t test: low scoring t₉ = 8.22, P = 1.8 × 10⁻⁵; high scoring t₉ = 4.45, P = 1.5 × 10⁻³. (C) An illustration of how we implemented enhanced excitability of new neurons in a neurogenic neural network by multiplying the activations of a new neuron by an excitability factor, c. (D) Box plot of test accuracies of control, neurogenic, and neurogenic + excitability CNNs; ANOVA: F₂,₅₇ = 19.01, P = 4.7 × 10⁻⁷; Tukey’s HSD: neurogenesis, neurogenesis + excitability > control, P < 0.01. (E) Box plot of the training accuracy of the control and neurogenesis groups at the end of training; t test: t₁₉ = 4.94, P = 9.0 × 10⁻⁵. (F) Plot of validation loss across training. *P value below 0.01; n.s. = not significant; error bars are the SEM.
Adult-generated neurons are more excitable than their developmentally generated counterparts, with excitability peaking between 4 and 8 wk of cell age (14–18). To address whether elevated excitability of new neurons might further promote regularization and improve the performance of neurogenic networks, we increased the activation of new neurons by 30% on each forward pass during training (SI Appendix, Fig. 1C and Fig. 2C). To reflect the transient nature of these changes, following each round of new neuron turnover, the excitability of previously turned-over neurons returned to baseline. As before, we found that neurogenic networks outperformed our control networks (i.e., nonneurogenic networks) on held-back test data. However, incorporating excitability into the neurogenic CNN did not further improve performance (Fig. 2D).
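The excitability manipulation amounts to scaling the activations of the most recently turned-over units by a factor c (1.3, matching the 30% increase) during training forward passes only. A hedged sketch, with hypothetical names:

```python
import numpy as np

def forward_hidden(x, W, b, new_idx, c=1.3, train=True):
    """ReLU hidden layer in which the units listed in new_idx (the most
    recently turned-over 'young' neurons) have their activations scaled
    by an excitability factor c during training; at test time, and after
    the next turnover event, excitability returns to baseline (c = 1)."""
    a = np.maximum(0.0, x @ W + b)        # standard ReLU pre-activation
    if train:
        a[..., new_idx] = a[..., new_idx] * c
    return a
```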

Neurogenesis Improves CNN Performance via Regularization.

The improved performance on held-back test data might reflect a more powerful model in the neurogenesis group, rather than a regularization effect. In other words, it could be that learning in general is simply better in the neurogenesis networks, in the same way that networks with more layers tend to learn better. Indeed, in adult rodent studies, interventions that elevate hippocampal neurogenesis facilitate learning in many (19, 20), but not necessarily all (21), situations. Similarly, in artificial neural networks, implementation of neurogenesis may improve learning in general (22–24), rather than having a specific effect on generalization. To assess this, we compared the training accuracy of the control and neurogenic networks, anticipating that training accuracy would be the same, or even reduced, in the neurogenesis network if the improved performance were due to regularization. We found that neurogenic networks do not improve training accuracy; in fact, they sacrifice training accuracy to achieve the previously observed higher test accuracy on held-back data. Neurogenic networks achieve lower validation loss but higher training loss relative to controls, suggesting that the enhancement in performance is indeed due to a regularization effect (Fig. 2 E and F).

Neurogenesis Regularization Achieves a Similar Level of Improvement to Dropout.

We next compared neurogenesis regularization to conventional regularization methods, including dropout, weight decay, and neural noise (Fig. 3 A–C). Dropout involves the stochastic silencing of a subset of units during each forward pass (25). Weight decay involves adding a small penalty to the loss function that penalizes large weights, thus resulting in an overall decay of larger weights (26). Neural noise involves the addition of Gaussian noise to all activations during each forward pass (27). After identifying the optimal parameter values for each of these regularization methods, we compared the performance of neurogenic networks to networks regularized with these other methods. We found that neurogenic CNNs performed similarly to dropout and outperformed weight decay and neural noise (Fig. 3D).
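The three conventional regularizers can each be summarized in a few lines. This is a generic sketch of inverted dropout, L2 weight decay, and additive activation noise, not the paper's code; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(a, p=0.2, train=True, rng=rng):
    """Inverted dropout: zero a random fraction p of activations during
    training and rescale the survivors, so no change is needed at test time."""
    if not train or p == 0.0:
        return a
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)

def weight_decay_grad(grad, W, lam=1e-4):
    """L2 weight decay: the penalty (lam/2)*||W||^2 adds lam * W to the
    loss gradient, shrinking large weights at every update."""
    return grad + lam * W

def neural_noise(a, sigma=0.1, train=True, rng=rng):
    """Additive Gaussian noise on activations during training only."""
    if not train:
        return a
    return a + rng.normal(0.0, sigma, size=a.shape)
```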
Fig. 3.
Illustration of regularization methods. (A) Dropout: A random subset of neurons and their weights are inactivated during a given forward pass. (B) Weight decay consists of adding a small penalty to the loss function that penalizes large weights, thus resulting in an overall decay of larger weights. (C) Neural noise: Gaussian noise is added to the activations of a layer. (D) Box plot of the test accuracy of neurogenesis compared to other regularization methods; ANOVA: F₄,₉₅ = 16.54, P = 2.5 × 10⁻¹⁰; Tukey’s HSD: neurogenesis vs. control P < 0.01, neurogenesis vs. dropout P > 0.01, neurogenesis vs. weight decay P < 0.01, neurogenesis vs. neural noise P < 0.01. (E) Heatmap of z scores of test performance in networks with combined regularization methods relative to neurogenesis-only networks. (F) Plot of neurogenesis and dropout combined using lower parameter values of dropout (0.1) and neurogenesis (turnover every 1,000 updates); ANOVA: F₂,₅₇ = 65.69, P = 1.6 × 10⁻¹⁵; Tukey’s HSD: control vs. neurogenesis P < 0.01, control vs. neurogenesis + dropout P < 0.01, neurogenesis vs. neurogenesis + dropout P > 0.01. *P value below 0.01; n.s. = not significant; error bars are the SEM.
Mechanistically, there are similarities between replacement/turnover neurogenesis and dropout, with both involving information loss at each update. In dropout, a subpopulation of neurons is transiently silenced, but connection weights are otherwise maintained. In contrast, replacement/turnover neurogenesis involves resetting the weights of a subpopulation of neurons. Interestingly, the optimal size of these subpopulations differs markedly for dropout vs. neurogenesis. Optimal performance occurred when dropout was implemented in a subpopulation that corresponded to ∼20% of hidden layer neurons, whereas for neurogenesis, optimal performance was achieved when the turnover subpopulation was ∼3.2% of hidden layer neurons.
We next asked whether combining regularization techniques might further enhance performance. Surprisingly, neurogenesis, when combined with dropout, weight decay, or neural noise, consistently reduced the performance (Fig. 3E). This suggests that the amount of noise injected by neurogenesis may be suboptimal when combined with other regularization techniques. Consistent with this idea, performance improved when the amount of noise injection for each regularization method was reduced. However, even using these reduced parameter values did not enhance performance beyond that of neurogenesis alone, suggesting that there is a ceiling effect beyond which performance cannot be improved further (Fig. 3F and SI Appendix, Fig. 3).

Targeted Neurogenesis in CNNs.

In our model, neurogenesis occurs in randomly selected subpopulations of middle layer neurons. However, the integration of new neurons might be nonrandom in nature. For example, in rodents there is evidence for neurogenesis-dependent refinement of synaptic connections in the hippocampus, with the integration of new neurons leading to the elimination of less active synaptic connections (28). Here we explored whether implementation of a similarly targeted turnover mechanism during training would further enhance performance of our neurogenic networks. Such an approach has previously been explored using dropout regularization, and, in this case, targeting dropout to neurons that were likely to be less important for the task (i.e., neurons with the lowest L1 norm values or total weights) led to significant improvement in test performance (29). Using a similar strategy, here we ranked neurons by their L1 norm values at every turnover event, and reinitialized weights of the bottom 3.2% ranking neurons. As a positive control, we reinitialized the weights of the top 3.2% ranking neurons in a separate experiment (Fig. 4A).
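The L1-targeted variant replaces only the random selection of units with a ranking step. A minimal sketch (function name and uniform reinitialization bounds are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def l1_targeted_turnover(W_in, W_out, fraction=0.032, target="low", rng=rng):
    """Reinitialize the hidden units with the lowest (target='low') or
    highest (target='high', the positive control) L1 norm of incoming
    weights, instead of a random subset."""
    norms = np.abs(W_in).sum(axis=0)              # L1 norm per hidden unit
    k = max(1, int(round(fraction * W_in.shape[1])))
    order = np.argsort(norms)                     # ascending by importance
    idx = order[:k] if target == "low" else order[-k:]
    lim = 1.0 / np.sqrt(W_in.shape[0])            # assumed init bound
    W_in[:, idx] = rng.uniform(-lim, lim, size=(W_in.shape[0], k))
    W_out[idx, :] = rng.uniform(-lim, lim, size=(k, W_out.shape[1]))
    return idx
```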
Fig. 4.
Targeted neurogenesis does not change network performance. (A) An illustration of targeted neurogenesis in a hidden layer. The neurons are ranked in order of input weight strengths, and a proportion of the lowest-ranked (low importance) or highest-ranked (high importance) neurons is targeted for neurogenesis (i.e., having their weights reset), or there is no targeting (random). (B) A box plot of model test accuracy in control, random neurogenesis, targeted neurogenesis of high importance neurons (positive control), and targeted neurogenesis of low importance neurons; ANOVA: F₃,₉₆ = 46.30, P = 1.3 × 10⁻¹⁸; Tukey’s HSD: control vs. random P < 0.01, control vs. high importance P < 0.01, random vs. low importance P > 0.01. *P value below 0.01; n.s. = not significant; error bars are the SEM.
Targeting top-ranked neurons impaired generalization performance below that of the control (no neurogenesis) group. This is expected, since these neurons are assumed to be more valuable to the performance of the network. However, targeting the bottom-ranked neurons did not further enhance performance (i.e., compared to random reinitializations affecting a subpopulation of the same size) (Fig. 4B). While this finding indicates that targeting turnover neurogenesis to the bottom L1 norm ranking neurons does not further improve generalization, we recognize that there may be alternate ways to determine the “importance” of units in a neurogenic network, and it is possible that targeting these might produce performance improvements.

Neurogenesis Improves Generalization but Increases Reliance on Individual Neurons.

The degree to which networks depend on single units vs. more distributed population codes influences generalization (30). Typically, networks that generalize well on held-back, unseen test data depend on distributed population codes, rather than on a small subset of units. Moreover, because of their distributed nature, these networks tend to be more resilient to random ablation of units (30). Conversely, networks that generalize poorly tend to depend on a small subset of units rather than on distributed codes, and these networks are typically less resilient to random ablation of units.
We performed a similar analysis here to assess whether neurogenesis improves generalization by reducing the dependence of networks on a small subset of single units. To evaluate reliance on single units vs. distributed codes, we sequentially ablated random units following training and compared pre- and postablation performance (accuracy was normalized to preablation performance) (Fig. 5A). Surprisingly, we found that neurogenic networks were less resilient: ablation of a smaller proportion of units was sufficient to decrease performance (as reflected by the leftward shift of the curve) (Fig. 5B). This indicates that neurogenesis improves generalization through a mechanism other than reducing the network’s reliance on individual neurons. We also measured the class selectivity of each neuron using the metric described in Morcos et al. (30) to test whether there were fewer highly class-selective neurons. We found that the distribution of class selectivities was shifted to the left (on average, neurons were less selective for class) in the neurogenesis group compared to dropout or control networks (Fig. 5C). To assess whether the reduced resilience of neurogenic networks to ablation translates to increased vulnerability to neurogenesis posttraining, we trained networks as before, with and without neurogenesis, and tested performance with and without posttraining neurogenesis, applying a single turnover event (replacing eight neurons) after training, at test time. We found that posttraining neurogenesis reduced performance of the neurogenic networks, but not the control networks, further demonstrating a specific sensitivity to perturbation in neurogenic networks (Fig. 5D).
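The class-selectivity index of Morcos et al. is, as we understand it, the normalized difference between a unit's largest class-conditional mean activation and the mean of its activations on the remaining classes. A sketch assuming that definition (the function name is ours):

```python
import numpy as np

def class_selectivity(mean_act):
    """Class selectivity of a single unit (Morcos et al.-style index, as we
    read it). mean_act[c] is the unit's mean activation on class c. Returns
    a value in [0, 1]: 1 means the unit responds to only one class, 0 means
    it responds equally to all classes."""
    mean_act = np.asarray(mean_act, dtype=float)
    u_max = mean_act.max()                         # best class response
    u_rest = (mean_act.sum() - u_max) / (mean_act.size - 1)
    denom = u_max + u_rest
    return 0.0 if denom == 0.0 else (u_max - u_rest) / denom
```

A left shift of this index's distribution across units, as reported for the neurogenic networks, indicates that individual neurons are on average less class selective.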
Fig. 5.
Networks with neurogenesis are less robust to ablation. (A) Illustration of ablation experiments. (B) Plot of the mean normalized accuracy across 20 repeats as progressively more neurons are ablated from the network. (C) Density plot of the class selectivity of neurons in the second hidden layer in control, neurogenic, and dropout networks. (D) Box plot of test accuracy of networks that were trained with and without neurogenesis and then tested with and without new neurons added posttraining; repeated-measures ANOVA: training × posttraining interaction, F₁,₁₉ = 58.68, P < 0.01; Tukey’s HSD: neurogenesis/control vs. neurogenesis/neurogenesis P < 0.01. *P value below 0.01; error bars are the SEM.


As adult-generated neurons integrate into hippocampal circuits, they form naive synapses and can therefore be thought of as a form of wiring noise. Work in deep learning has shown that various forms of noise injection can reduce overfitting on training data and, as a result, enhance generalization in deep neural networks. We therefore hypothesized that neurogenesis-mediated rewiring would similarly have a regularization effect, i.e., prevent memorization of training data and favor a more flexible, generalized memory that can be applied in a broader range of circumstances. We explored this hypothesis computationally, implementing neurogenesis in a hidden layer of a CNN trained on the CIFAR-10 object recognition task. Consistent with our hypothesis, neurogenesis acted as a regularizer, improving generalization on the test (held-out) data. These data identify neurogenesis as one potential mechanism the brain can use to regularize neural circuits. Other potential mechanisms include those that resemble dropout (31) and weight decay (i.e., synaptic downscaling) (32).
In our model, we implemented neurogenesis as a replacement/turnover mechanism, where input and output synaptic weights associated with small subsets of middle layer neurons were reinitialized throughout training. We used this approach since neurogenesis in the rodent hippocampus similarly involves replacement of mature, developmentally generated neurons with immature, adult-generated neurons, and therefore turnover of associated synaptic weights. For instance, in the adult rat dentate gyrus there is significant loss of mature granule cells that were born soon after birth (10). Since there is negligible overall increase in the size of the dentate granule cell layer during adulthood (33), this implies that this loss of developmentally generated granule cells in adulthood is balanced by new neuron addition (11). Viewed in this way, neurogenesis regularization can be thought of as a form of wiring noise that incrementally alters network connectivity patterns without impacting overall network size.
We found that turnover of only a small fraction of all neurons (<1%) was sufficient to improve generalization (e.g., SI Appendix, Fig. 1A), similar to previous observations that low rates of neurogenesis significantly impact network function in more biologically realistic artificial neural networks (34). While caution is warranted when generalizing from abstracted models such as these, these turnover rates nonetheless mirror the low rates of turnover observed in humans and rodents (35–37).
In the CNN, we explored whether targeting neurogenesis regularization to less important neurons was beneficial. Using L1 norm values as a metric of importance (29), we assessed generalization following three types of neurogenesis regularization: neurogenesis applied randomly across all middle layer units, neurogenesis restricted to the least important middle layer neurons (i.e., those with the lowest L1 norm values or weakest weights), and neurogenesis restricted to the most important middle layer neurons (i.e., those with the highest L1 norm values or strongest weights). As expected, targeting neurons with the highest L1 norm scores decreased generalization scores below control (no neurogenesis) levels, consistent with the idea that neurons with the highest L1 norm values are indeed more important for the task. However, targeting neurogenesis regularization to neurons with the lowest L1 norm scores did not improve generalization beyond networks with nontargeted (random) neurogenesis regularization.
In the rodent brain, there is some evidence that the integration of new neurons is a nonrandom process, with the integration of new neurons leading to the pruning of less active synaptic connections (28). While our current analyses suggest that such a targeted mechanism may not be critical for improving generalization, nonetheless it is possible that using the L1 norm of each unit may not be the most suitable metric of importance. Alternatively, the lottery ticket hypothesis suggests that within these networks there exist sparse subnetworks that, when trained in isolation, can achieve the same final performance accuracy of the entire network but in the same or fewer training epochs (38). Interestingly, the rates at which these lottery-winning subnetwork neurons change weights is much higher compared to other neurons. Therefore, one possibility would be to target neurogenesis regularization to neurons that change their weights the least (i.e., those that contribute the least to the loss function) during training. Simply looking at the L1 norm, which is not sensitive to the rate of weight changes, would not capture the same importance as described in the lottery ticket hypothesis.
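The alternative importance metric suggested here, the rate at which a unit's weights change during training, could be tracked with a simple accumulator. This is a hypothetical sketch (class and method names are ours), not something implemented in the study:

```python
import numpy as np

class WeightChangeTracker:
    """Accumulate |ΔW| per hidden unit across training updates. Units whose
    incoming weights change the least over training would be candidates for
    targeted turnover under the lottery-ticket-inspired criterion."""
    def __init__(self, W_in):
        self.prev = W_in.copy()                   # W_in: (n_prev, n_hidden)
        self.total = np.zeros(W_in.shape[1])
    def update(self, W_in):
        """Call after each optimizer step with the current weights."""
        self.total += np.abs(W_in - self.prev).sum(axis=0)
        self.prev = W_in.copy()
    def least_changed(self, k):
        """Indices of the k units with the smallest cumulative change."""
        return np.argsort(self.total)[:k]
```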
A second issue we explored in the CNN was whether neurogenesis regularization impacts how information is organized within the network. Typically, better generalization is associated with more distributed coding (30). Following dropout regularization, for example, information tends to be coded in a more distributed manner, rather than in single units. One consequence of this organization is that networks tend to be more resilient to random, sequential ablation of units following dropout regularization. In contrast, we found that there was greater reliance on single units following neurogenesis regularization, and networks regularized this way were more vulnerable to random ablations (despite better generalization performance).
One way these findings might be viewed is that while neurogenesis regularization improves generalization, it also introduces network vulnerabilities. That is, randomly turning over units pushes the network to rely on neurons that are tuned to single directions (via an unknown mechanism). This may improve generalization performance, but subsequent random turnover events can also eliminate neurons that are highly tuned to single directions and, hence, have catastrophic consequences. The idea that neurogenesis might introduce network vulnerabilities is interesting since neurogenesis has been linked to forgetting (21, 39). For example, posttraining increases in neurogenesis induce forgetting of established hippocampus-dependent memories in adult rodents (40–42). Whether forgetting happens may depend on the levels and/or the timing of neurogenesis.
With respect to levels of neurogenesis, whereas low levels of neurogenesis may promote overfitting (i.e., memorization or no forgetting), high levels of neurogenesis may promote underfitting (i.e., forgetting). In between these extremes, moderate levels of neurogenesis may prevent overfitting and promote generalization (43). According to this scenario, it may be the case that neurogenesis levels need to be tightly regulated in order to balance the costs (e.g., overfitting, forgetting) vs. benefits (improved generalization) of rewiring (5). Notably, neurogenesis levels change with age and are modulated by environmental conditions (e.g., stress, enrichment) (44, 45). Therefore, the efficiency of generalization may shift across an organism’s lifespan and change with environmental circumstances. Neurogenesis also persists in adulthood in the subventricular zone, leading to the integration of new neurons in the olfactory bulb (46). Therefore, it is possible that neurogenesis also regularizes odor-based memories in the olfactory bulb.
Our analyses suggest that the timing of increases in neurogenesis (with respect to training) might also matter. When neurogenesis occurred in concert with training, we found this led to improved performance on subsequent test data. However, when neurogenesis-like turnover occurs at the time of testing we observed a decrease in performance on the unseen test data. This suggests that neurogenic networks need to keep training if they are to retain high performance. From this perspective, in future rodent work it might be useful to contrast the impact of manipulating neurogenesis on active memories (i.e., those that are undergoing training and are, therefore, invulnerable) vs. inactive memories (i.e., those where training is “complete” and therefore are vulnerable).
Our model assumes new neurons function as encoding units. An alternative possibility is that new neurons play a modulatory role, increasing network sparsity via GABAergic interneurons (47, 48). If implemented in this manner, it is likely that neurogenesis would behave similarly to dropout, which generates sparse subsets of network activity during training. This raises the possibility that neurogenesis can regularize neural circuits in additional ways beyond that studied here.
What are the implications for functional studies of hippocampal neurogenesis? The functional consequences of altering hippocampal neurogenesis (or manipulating the activity of adult-generated granule cells) have largely been studied in rodents (8, 49, 50). A major focus of these studies has been on the role of hippocampal neurogenesis in pattern separation (51, 52). In a typical experiment, an experimental intervention is introduced to alter levels of hippocampal neurogenesis, and then the ability of mice or rats to make fine spatial or contextual discriminations is assessed. In a touchscreen apparatus, for example, this might involve discriminating between stimuli presented in different spatial locations (53). Rodents with reduced neurogenesis perform poorly under conditions where spatial similarity is high, whereas rodents with elevated neurogenesis exhibit enhanced discrimination (19, 54).
The category learning task we used here is structured differently from the pattern separation tasks described above. Our task assesses the ability of the model to generalize learned patterns to new, unseen data. Nonetheless, we note that pattern separation—the capacity to distinguish closely related but categorically different stimuli—is key to appropriate generalization. The model must distinguish between airplanes, cars, birds, etc., and so the ability to pattern separate is tied to the ability to generalize appropriately. While studies of category learning are common in monkeys (55), investigations of category learning have been less common in rodents (where neurogenesis levels can be more readily manipulated). Nonetheless, both object recognition and touchscreen-based category learning tasks have been developed for rodents (56, 57). The object category recognition tasks take advantage of rodents’ innate preference for novelty. During the study phase, mice are allowed to explore two objects from the same category (e.g., two different toy cars). During the test phase, mice are presented with a choice between a third object from the studied category (i.e., another toy car) and a novel object from a new category (e.g., a hair clip). Should the mouse exhibit a preference for the object from the unstudied category (i.e., the hair clip), this suggests some within-category generalization (i.e., across different types of toy cars) (57). In the touchscreen-based category learning task, mice are trained to discriminate between two-dimensional visual stimuli presented on a touchscreen that can be categorized according to spatial features (e.g., spatial frequency and orientation of gratings). Generalization is assessed when rodents are subsequently tested on novel visual stimuli from the studied categories (56).
Based on the findings presented here, we predict that suppression of adult hippocampal neurogenesis will impair generalization in such tests, whereas increasing adult hippocampal neurogenesis will facilitate performance.
While our model captures replacement/turnover as a core feature of neurogenesis, we nonetheless recognize that it is highly abstracted and differs significantly from biological networks in terms of architecture, connectivity, and sparsity. Another feature that was not captured in the current model, but is relevant especially in the context of consolidation, is replay. During sleep, previous event sequences are “replayed” in the hippocampus, and this may provide an opportunity to integrate new experiences with prior, relevant experience and promote generalization (58, 59). Indeed, rapid eye movement (REM) sleep, the period of sleep during which replay events typically occur (59), is associated with enhanced memory generalization in humans (60). Interestingly, adult-born dentate granule cells that were active during learning are reactivated during subsequent REM (61). This suggests that new neurons may indeed be contributing to consolidation as a source of noisy replay, potentially promoting neural regularization and generalization of hippocampal memories. Consistent with this view, in a computational model, O’Donnell and Sejnowski found that noisy replay led to a relative increase in the overlap between input patterns and a particular target pattern (62). Such a mechanism might underlie generalization by broadening the types of inputs that would drive activation of a given memory.


The current analyses provide evidence that “wiring noise” in the hippocampus, in the form of ongoing neurogenesis, provides a means to regularize memories, thereby preventing memorization and promoting generalization. These ideas are consistent with recent computational and imaging evidence that the hippocampus supports statistical learning/generalized memories, in addition to detailed memories, in humans (63, 64), and they provide a potential mechanism. Because neurogenesis can be experimentally manipulated in rodents, these predictions can be evaluated using category learning (or related) paradigms in the future.



Code for these methods is available on GitHub (77).


Models were built and analyzed in Python 3.6 (65) with custom scripts that are freely available on GitHub, and were developed using the following packages: PyTorch (66), Ax, NumPy (67), SciPy (68), Pandas (69), Matplotlib (70), Seaborn (71), and Scikit-learn 0.21.1 (72).

Computing Resources.

These experiments were implemented on the high-performance compute clusters at Compute Canada and Vector Institute for Artificial Intelligence.

Convolutional Neural Network.

The CNN comprises convolutional layers whose filters extract image features that are unaffected by translation. Each convolutional layer is followed by a pooling layer that downsamples its input to improve computational efficiency. Finally, after multiple sets of convolutional and pooling layers, there are fully connected layers. The input layer was of size 32 × 32 × 3, corresponding to the images (with three color channels) from the CIFAR-10 dataset. The CNN was built with three sets of two convolutional layers, each set followed by a max-pooling layer. The convolutional layer sets had 16, 32, and 64 filters, respectively, with a 3 × 3 filter size and a stride (the step the filter moves along the image) of 1. The max-pooling layers pool a 2 × 2 region and are applied at a stride of 1. The convolutional layers are then connected to three fully connected layers, each with 250 units, using a rectified linear activation function. Neurogenesis occurred in the middle fully connected layer. Data were split into training (40,000 images), validation (10,000 images), and test (10,000 images) sets. We tuned the network’s hyperparameters by training on the training set and evaluating on the validation set; test results are derived from networks trained on the full training set. The hyperparameters, i.e., values that regulate learning in the networks, and other information about the networks are listed in Table 1.
Table 1.
Table of CNN model parameters
Batch size: 4
Learning rate: 0.0002
Turnover proportion: 0.032
Turnover frequency: 1/640 batch updates
Weight decay: 0.00001
Neural noise (log normal: mean, std): −0.2, 0.5
Excitation factor (c): 1.3
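The architecture described above can be sketched in PyTorch. This is a minimal reconstruction from the text, not the published code: the exact padding, flattening, and output-layer details are assumptions, and `nn.LazyLinear` is used purely to infer the flattened feature size.

```python
import torch
import torch.nn as nn

class NeurogenesisCNN(nn.Module):
    """Sketch of the CNN described in the text (unstated details assumed)."""

    def __init__(self, n_classes: int = 10, fc_units: int = 250):
        super().__init__()

        def block(c_in: int, c_out: int) -> nn.Sequential:
            # two 3 x 3 convolutions (stride 1) followed by 2 x 2 max-pooling (stride 1)
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3), nn.ReLU(),
                nn.Conv2d(c_out, c_out, 3), nn.ReLU(),
                nn.MaxPool2d(2, stride=1),
            )

        # three convolutional sets with 16, 32, and 64 filters
        self.features = nn.Sequential(block(3, 16), block(16, 32), block(32, 64))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(fc_units), nn.ReLU(),
            nn.Linear(fc_units, fc_units), nn.ReLU(),  # middle FC layer: neurogenesis here
            nn.Linear(fc_units, fc_units), nn.ReLU(),
            nn.Linear(fc_units, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))
```

A forward pass on a batch of CIFAR-10-shaped inputs (N × 3 × 32 × 32) yields one logit per class.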


The CIFAR-10 dataset is a collection of 60,000 images commonly used to train machine learning and computer vision models, split into 50,000 training and 10,000 test images across 10 classes. The images are 32 × 32 color images, with the 10 classes representing airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks.

Training Neurogenic Networks.

We found that neural networks with neurogenesis can be trained using stochastic gradient descent (73). During training, neurogenesis was implemented on an ongoing basis, with multiple turnover events occurring. Neurogenesis was implemented in the fully connected layers, rather than the convolutional layers, of the CNN, and occurred in the second hidden layer (of three). It took the form of a replacement/turnover mechanism, whereby a randomly chosen subset of neurons in the layer was turned over such that their weights were reinitialized to new values, while the other neurons maintained their learned weights. We initialized the new neurons’ weights using the same function used when randomly initializing the network at the start of training, i.e., Kaiming uniform initialization (74). Turnover can occur at a frequency of once every n batch updates. We tuned the hyperparameters using Bayesian optimization (BO), an iterative parameter tuning process that builds a probability model for the best parameters to try next (75), searching over the turnover frequency (ranging from once every update to once every 12,500 updates) and the number of neurons turned over at each event (ranging from 0 to 250 neurons). We found that turning over 8 neurons every 640 updates in the second fully connected layer resulted in the highest performance on the validation set.
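The turnover step can be sketched as follows. This is a hypothetical helper, not the published implementation: it reinitializes the incoming weights and biases of a random subset of units in a fully connected layer using PyTorch's default Kaiming-uniform scheme.

```python
import torch
import torch.nn as nn

def turnover(layer: nn.Linear, n_new: int) -> torch.Tensor:
    """Reinitialize `n_new` randomly chosen units in `layer`, mimicking the
    replacement of mature neurons by naive ones. Returns the chosen indices."""
    idx = torch.randperm(layer.out_features)[:n_new]
    with torch.no_grad():
        # fresh Kaiming-uniform weights, matching PyTorch's default Linear init
        new_w = torch.empty(n_new, layer.in_features)
        nn.init.kaiming_uniform_(new_w, a=5 ** 0.5)
        layer.weight[idx] = new_w
        # biases reset with the default uniform bound (1 / sqrt(fan_in))
        bound = 1.0 / layer.in_features ** 0.5
        layer.bias[idx] = torch.empty(n_new).uniform_(-bound, bound)
    return idx
```

Calling `turnover(fc2, 8)` once every 640 batch updates during the training loop would reproduce the schedule described above; all other units keep their learned weights.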

Enhancing Excitability.

To evaluate whether manipulating the “excitability” of new neurons might impact regularization, in a separate experiment we incorporated an excitability component into our neural networks in each forward pass during training. We multiplied the activations of the neurogenesis layer by an excitability array. The entries of this array corresponding to the new neurons were set to an excitation factor (1.3; determined by hyperparameter tuning), and all other entries were set to 1. As a result, only the activations corresponding to the turned-over neurons were enhanced, and the remainder of the neurons’ activations were unchanged. Whenever a new set of neurons was turned over, the previous set of new neurons became “mature” and were no longer “excited” in the forward pass.
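The excitability array described above amounts to an elementwise scaling of the neurogenesis layer's activations. A minimal sketch (the function name and interface are illustrative, not from the published code):

```python
import torch

def apply_excitability(acts: torch.Tensor, new_idx: torch.Tensor,
                       factor: float = 1.3) -> torch.Tensor:
    """Multiply activations by an excitability array: entries for newly
    turned-over units are set to `factor`, all other entries to 1."""
    excite = torch.ones(acts.shape[-1])
    excite[new_idx] = factor
    return acts * excite
```

In the training loop this would be applied to the neurogenesis layer's output each forward pass, with `new_idx` refreshed at every turnover event so that the previous cohort reverts to a multiplier of 1.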

Other Regularization Methods.

To compare the performance of neurogenesis to other regularization methods, we also used dropout, weight decay, and neural noise in the CNNs. Dropout involves the stochastic silencing of a subset of units during each forward pass (25). We used a dropout rate of 0.2 in CNNs, with an additional seven epochs required to reach the end of training compared to nondropout networks. Weight decay consists of adding a small penalty to the loss function that penalizes large weights, thus resulting in an overall decay of larger weights (26). We used a weight decay value of 0.00001 for our CNN. Neural noise involves the addition of noise to all the activations during each forward pass (27). We defined our noise using a log normal distribution with a mean of −0.2 and an SD of 0.5 for our CNN. For each of these methods (dropout, weight decay, and neural noise), we tuned the hyperparameters using BO (75) to identify the optimal values for the following parameters in each method:
Dropout: Dropout rate (range from 0 to 1)
Weight decay: Weight decay penalty (range from 1 × 10−10 to 1 × 10−1, on a log 10 scale)
Neural noise: Mean (range from −1 to 1) and SD (range from 0 to 1) of a log normal distribution.
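The three conventional regularizers can be expressed compactly in PyTorch. This sketch uses the tuned values from Table 1; how the noise and dropout are wired into the full model is an assumption here.

```python
import torch
import torch.nn as nn

x = torch.randn(4, 250)  # stand-in hidden-layer activations

# dropout: stochastically silence a subset of units each forward pass
x_drop = nn.Dropout(p=0.2)(x)

# weight decay: an L2 penalty on the weights, passed to the optimizer
model = nn.Linear(250, 10)
opt = torch.optim.SGD(model.parameters(), lr=2e-4, weight_decay=1e-5)

# neural noise: additive noise drawn from a log normal distribution
# (underlying normal with mean -0.2 and SD 0.5)
noise = torch.distributions.LogNormal(-0.2, 0.5).sample(x.shape)
x_noisy = x + noise
```

Note that dropout and the additive noise act on activations at run time, whereas weight decay acts on the weights through the loss; all three inject a form of regularizing perturbation into training.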

Ablation Experiments.

To measure the importance of individual neurons in a network, we tested how much the network’s performance degrades as progressively more neurons are removed from the network (based on Morcos et al.) (30). To remove a neuron, we set that neuron’s activity to a fixed value of 0, effectively ablating the unit. We progressively ablated neurons in proportional steps of 5% of the neurons in the neurogenesis layer, testing accuracy on the training data at each step to generate ablation curves. We repeated each ablation five times and randomized the order of neurons ablated each time. Ablation curves plot the degradation in accuracy as more neurons are ablated: networks that rely more heavily on individual units drop in accuracy more quickly as units are ablated. Lower sensitivity to ablation has been shown to correlate with better generalization (30).
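The ablation procedure above can be sketched as a small driver function. The `evaluate` callback is a hypothetical interface standing in for a full forward pass with the given units clamped to zero; the published scripts may organize this differently.

```python
import numpy as np

def ablation_curve(evaluate, n_units: int, step: float = 0.05, seed=None):
    """Ablate progressively larger random fractions of units, recording
    accuracy at each step. `evaluate(idx)` must return accuracy with the
    units in `idx` fixed to zero activity (caller-supplied)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_units)            # randomized ablation order
    fracs = np.arange(0.0, 1.0 + 1e-9, step)    # 0%, 5%, ..., 100%
    return np.array([evaluate(order[: int(round(f * n_units))]) for f in fracs])
```

Running this several times with different seeds and averaging yields the repeated, order-randomized curves described in the text.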

Class Selectivity.

Class selectivity was calculated using the method described in ref. 30. The class-conditional mean activity was calculated from the test set, and the class selectivity index was measured as:
selectivity = (µmax − µ−max) / (µmax + µ−max),
where µmax is the highest class-conditional mean activity and µ−max is the mean activity across all other classes.
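Given a matrix of class-conditional mean activities, the selectivity index can be computed per unit as follows (a sketch following the metric of Morcos et al., ref. 30; the small epsilon guarding against division by zero is an implementation choice here):

```python
import numpy as np

def class_selectivity(class_means: np.ndarray) -> np.ndarray:
    """Per-unit selectivity from a (n_classes, n_units) array of
    class-conditional mean activities: (mu_max - mu_rest)/(mu_max + mu_rest),
    where mu_rest averages over all non-preferred classes."""
    mu_max = class_means.max(axis=0)
    n_classes = class_means.shape[0]
    # mean activity across all classes other than the preferred one
    mu_rest = (class_means.sum(axis=0) - mu_max) / (n_classes - 1)
    return (mu_max - mu_rest) / (mu_max + mu_rest + 1e-12)
```

A unit active for only one class scores near 1; a unit equally active for all classes scores 0.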

Targeted Neurogenesis.

To test whether a targeted approach to removing new neurons might improve the performance of neurogenic networks, we implemented a targeted turnover of new neurons. Based on the work in ref. 29, we adapted the targeted unit dropout technique to choose which units in the network to replace with naive units. Gomez et al. found that targeting neurons for dropout in this way performed better than a random dropout process for identifying pruned networks (29). While identifying pruned networks is not the goal of our work, we could use their method to identify important and unimportant units to target for turnover. A unit’s importance was determined by its L1 norm (i.e., the sum of the absolute values of all the weights inputting onto the unit), with units ranked from lowest to highest value; a lower L1 norm indicates lower importance. To test whether targeting neurogenesis to the low-importance neurons would improve performance, a proportion of the lowest-ranking neurons matching the proportion of neural turnover had their weights reset. As a positive control, we also targeted the highest-importance neurons (highest L1 norm values) for turnover to confirm that this metric does indeed carry information about the importance of neurons for learning.
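The targeting rule can be sketched as a variant of the random turnover above. The function name and interface are illustrative: it ranks units by the L1 norm of their incoming weights and resets the least (or, for the positive control, the most) important ones.

```python
import torch
import torch.nn as nn

def targeted_turnover(layer: nn.Linear, n_new: int, lowest: bool = True):
    """Reset the `n_new` least-important units (or most-important, when
    `lowest=False`), ranking importance by the L1 norm of each unit's
    incoming weights, adapted from targeted dropout (ref. 29)."""
    importance = layer.weight.abs().sum(dim=1)              # L1 norm per unit
    idx = torch.argsort(importance, descending=not lowest)[:n_new]
    with torch.no_grad():
        new_w = torch.empty(n_new, layer.in_features)
        nn.init.kaiming_uniform_(new_w, a=5 ** 0.5)         # naive weights
        layer.weight[idx] = new_w
    return idx
```

With `lowest=True` this implements the low-importance targeting condition; `lowest=False` gives the positive control.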

Statistical Analyses and Plotting.

All statistics were performed using the scipy.stats module in Python 3.6 (65, 68). Error bars on graphs represent the SEM across different initializations of the model, where each experiment was repeated 20 times unless otherwise stated. Comparisons were made using unpaired, two-tailed t tests, or analysis of variance (ANOVA) followed by Tukey’s honestly significant difference (HSD) post hoc tests where appropriate. Significance is indicated by an asterisk (*) for P values less than 0.01 unless otherwise stated. Graphs were generated using the matplotlib and seaborn packages in Python 3.6 (70, 71), and figures were compiled using Inkscape (76).
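The pairwise comparisons described above reduce to a few scipy.stats calls. The accuracy values below are made-up illustrative numbers, not the paper's results:

```python
import numpy as np
from scipy import stats

# hypothetical test accuracies for 20 initializations of each model
rng = np.random.default_rng(1)
neurogenic = rng.normal(0.72, 0.01, 20)
baseline = rng.normal(0.70, 0.01, 20)

t_stat, p_val = stats.ttest_ind(neurogenic, baseline)  # unpaired, two-tailed t test
sem = stats.sem(neurogenic)                            # error bars: SEM across seeds
```

For more than two groups, `scipy.stats.f_oneway` provides the ANOVA, with Tukey's HSD applied post hoc.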

Data, Materials, and Software Availability

All code used to generate the experiments has been deposited in GitHub (77).


We thank Jason Snyder for comments on earlier drafts of this manuscript. This work was supported by a Canadian Institutes of Health Research (FDN143227) grant to P.W.F. and the Canadian Institute for Advanced Research (CIFAR) catalyst grant (to S.A.J., B.A.R., and P.W.F.). L.M.T. was supported by fellowships from The Hospital for Sick Children, the Vector Institute, and the Natural Sciences and Engineering Research Council of Canada. B.A.R. and P.W.F. are fellows in the Learning in Machines and Brains and the Child, Brain and Development programs, respectively, at CIFAR.

Supporting Information

Appendix 01 (PDF)


A. A. Faisal, L. P. J. Selen, D. M. Wolpert, Noise in the nervous system. Nat. Rev. Neurosci. 9, 292–303 (2008).
M. D. McDonnell, L. M. Ward, The benefits of noise in neural systems: Bridging theory and experiment. Nat. Rev. Neurosci. 12, 415–426 (2011).
G. E. Hinton, D. Camp, “Keeping the neural networks simple by minimizing the description length of the weights” in Proceedings of the Sixth Annual Conference on Computational Learning Theory, L. Pitt, Ed. 5–13 (Association for Computing Machinery, 1993).
E. Hoel, The overfitted brain: Dreams evolved to assist generalization. Patterns (N Y) 2, 100244 (2021).
B. A. Richards, P. W. Frankland, The persistence and transience of memory. Neuron 94, 1071–1084 (2017).
D. N. Abrous, M. Koehl, M. Le Moal, Adult neurogenesis: From precursors to network and physiology. Physiol. Rev. 85, 523–569 (2005).
A. Denoth-Lippuner, S. Jessberger, Formation and integration of new neurons in the adult hippocampus. Nat. Rev. Neurosci. 22, 223–236 (2021).
J. T. Gonçalves et al., In vivo imaging of dendritic pruning in dentate granule cells. Nat. Neurosci. 19, 788–791 (2016).
G. L. Ming, H. Song, Adult neurogenesis in the mammalian central nervous system. Annu. Rev. Neurosci. 28, 223–250 (2005).
T. Ciric, S. P. Cahill, J. S. Snyder, Dentate gyrus neurons that are born at the peak of development, but not before or after, die in adulthood. Brain Behav. 9, e01435 (2019).
J. D. Cole et al., Adult-born hippocampal neurons undergo extended development and are morphologically distinct from neonatally-born neurons. J. Neurosci. 40, 5740–5756 (2020).
T. J. Draelos et al., “Neurogenesis deep learning: Extending deep networks to accommodate new classes” in 2017 International Joint Conference on Neural Networks (IJCNN) (2017), pp. 526–533.
A. Krizhevsky, Learning multiple layers of features from tiny images (2009).
C. V. Dieni et al., Low excitatory innervation balances high intrinsic excitability of immature dentate neurons. Nat. Commun. 7, 11313 (2016).
F. Doetsch, R. Hen, Young and excitable: The function of new neurons in the adult mammalian brain. Curr. Opin. Neurobiol. 15, 121–128 (2005).
A. Marín-Burgin, L. A. Mongiat, M. B. Pardi, A. F. Schinder, Unique processing during a period of high excitation/inhibition balance in adult-born neurons. Science 335, 1238–1242 (2012).
L. A. Mongiat, M. S. Espósito, G. Lombardi, A. F. Schinder, Reliable activation of immature neurons in the adult hippocampus. PLoS One 4, e5320 (2009).
C. Schmidt-Hieber, P. Jonas, J. Bischofberger, Enhanced synaptic plasticity in newly generated granule cells of the adult hippocampus. Nature 429, 184–187 (2004).
D. J. Creer, C. Romberg, L. M. Saksida, H. van Praag, T. J. Bussey, Running enhances spatial pattern separation in mice. Proc. Natl. Acad. Sci. U.S.A. 107, 2367–2372 (2010).
A. Sahay et al., Increasing adult hippocampal neurogenesis is sufficient to improve pattern separation. Nature 472, 466–470 (2011).
P. W. Frankland, S. Köhler, S. A. Josselyn, Hippocampal neurogenesis and forgetting. Trends Neurosci. 36, 497–503 (2013).
R. A. Chambers, M. N. Potenza, R. E. Hoffman, W. Miranker, Simulated apoptosis/neurogenesis regulates learning and memory capabilities of adaptive neural networks. Neuropsychopharmacology 29, 747–758 (2004).
K. Deisseroth et al., Excitation-neurogenesis coupling in adult neural stem/progenitor cells. Neuron 42, 535–552 (2004).
L. A. Meltzer, R. Yabaluri, K. Deisseroth, A role for circuit homeostasis in adult neurogenesis. Trends Neurosci. 28, 653–660 (2005).
N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15, 1929–1958 (2014).
A. Krogh, J. A. Hertz, A simple weight decay can improve generalization. Adv. Neural Inf. Process. Syst. 4, 950–957 (1992).
C. M. Bishop, Training with noise is equivalent to Tikhonov regularization. Neural Comput. 7, 108–116 (1995).
M. Yasuda et al., Multiple forms of activity-dependent competition refine hippocampal circuits in vivo. Neuron 70, 1128–1142 (2011).
A. N. Gomez et al., Learning sparse networks using targeted dropout. ArXiv [Preprint] (2019). (Accessed 10 November 2020).
A. S. Morcos, D. G. T. Barrett, N. C. Rabinowitz, M. Botvinick, On the importance of single directions for generalization. ArXiv [Preprint] (2018). (Accessed 9 November 2020).
K. L. McKee, I. C. Crandell, R. Chaudhuri, R. C. O’Reilly, Locally learned synaptic dropout for complete bayesian inference. ArXiv [Preprint] (2021). (Accessed 23 August 2022).
G. Tononi, C. Cirelli, Sleep function and synaptic homeostasis. Sleep Med. Rev. 10, 49–62 (2006).
P. R. Rapp, M. Gallagher, Preserved neuron number in the hippocampus of aged rats with spatial learning deficits. Proc. Natl. Acad. Sci. U.S.A. 93, 9926–9930 (1996).
L. M. Tran, S. A. Josselyn, B. A. Richards, P. W. Frankland, Forgetting at biologically realistic levels of neurogenesis in a large-scale hippocampal model. Behav. Brain Res. 376, 112180 (2019).
E. P. Moreno-Jiménez et al., Adult hippocampal neurogenesis is abundant in neurologically healthy subjects and drops sharply in patients with Alzheimer’s disease. Nat. Med. 25, 554–560 (2019).
K. L. Spalding et al., Dynamics of hippocampal neurogenesis in adult humans. Cell 153, 1219–1227 (2013).
G. Kempermann, H. G. Kuhn, F. H. Gage, More hippocampal neurons in adult mice living in an enriched environment. Nature 386, 493–495 (1997).
J. Frankle, M. Carbin, The lottery ticket hypothesis: Finding sparse, trainable neural networks. ArXiv [Preprint] (2019). (Accessed 28 October 2020).
T. J. Ryan, P. W. Frankland, Forgetting as a form of adaptive engram cell plasticity. Nat. Rev. Neurosci. 23, 173–186 (2022).
K. G. Akers et al., Hippocampal neurogenesis regulates forgetting during adulthood and infancy. Science 344, 598–602 (2014).
J. R. Epp, R. Silva Mera, S. Köhler, S. A. Josselyn, P. W. Frankland, Neurogenesis-mediated forgetting minimizes proactive interference. Nat. Commun. 7, 10838 (2016).
A. Gao et al., Elevation of hippocampal neurogenesis induces a temporally graded pattern of forgetting of contextual fear memories. J. Neurosci. 38, 3190–3198 (2018).
S. Y. Ko, P. W. Frankland, Neurogenesis-dependent transformation of hippocampal engrams. Neurosci. Lett. 762, 136176 (2021).
G. D. Clemenson, W. Deng, F. H. Gage, Environmental enrichment and neurogenesis: From mice to humans. Curr. Opin. Behav. Sci. 4, 56–62 (2015).
E. Gould, P. Tanapat, Stress and hippocampal neurogenesis. Biol. Psychiatry 46, 1472–1479 (1999).
G. Lepousez, M. T. Valley, P.-M. Lledo, The impact of adult neurogenesis on olfactory bulb circuits and computations. Annu. Rev. Physiol. 75, 339–363 (2013).
T. Ikrar et al., Adult neurogenesis modifies excitability of the dentate gyrus. Front. Neural Circuits 7, 204 (2013).
B. H. Singer et al., Compensatory network changes in the dentate gyrus restore long-term potentiation following ablation of neurogenesis in young-adult mice. Proc. Natl. Acad. Sci. U.S.A. 108, 5437–5442 (2011).
C. Anacker, R. Hen, Adult hippocampal neurogenesis and cognitive flexibility—Linking memory and mood. Nat. Rev. Neurosci. 18, 335–346 (2017).
H. A. Cameron, L. R. Glover, Adult neurogenesis: Beyond learning and memory. Annu. Rev. Psychol. 66, 53–81 (2015).
A. Sahay, D. A. Wilson, R. Hen, Pattern separation: A common function for new neurons in hippocampus and olfactory bulb. Neuron 70, 582–588 (2011).
A. Santoro, Reassessing pattern separation in the dentate gyrus. Front. Behav. Neurosci. 7, 96 (2013).
C. A. Oomen et al., The touchscreen operant platform for testing working memory and pattern separation in rats and mice. Nat. Protoc. 8, 2006–2021 (2013).
C. D. Clelland et al., A functional role for adult hippocampal neurogenesis in spatial pattern separation. Science 325, 210–213 (2009).
F. G. Ashby, B. J. Spiering, The neurobiology of category learning. Behav. Cogn. Neurosci. Rev. 3, 101–113 (2004).
M. B. Broschard, J. Kim, B. C. Love, J. H. Freeman, Category learning in rodents using touchscreen-based tasks. Genes Brain Behav. 20, e12665 (2021).
S. D. Creighton et al., Development of an “object category recognition” task for mice: Involvement of muscarinic acetylcholine receptors. Behav. Neurosci. 133, 527–536 (2019).
D. Ji, M. A. Wilson, Coordinated memory replay in the visual cortex and hippocampus during sleep. Nat. Neurosci. 10, 100–107 (2007).
K. Louie, M. A. Wilson, Temporally structured replay of awake hippocampal ensemble activity during rapid eye movement sleep. Neuron 29, 145–156 (2001).
I. Lerner, S. M. Lupkin, A. Tsai, A. Khawaja, M. A. Gluck, Sleep to remember, sleep to forget: Rapid eye movement sleep can have inverse effects on recall and generalization of fear memories. Neurobiol. Learn. Mem. 180, 107413 (2021).
D. Kumar et al., Sparse activity of hippocampal adult-born neurons during REM sleep is necessary for memory consolidation. Neuron 107, 552–565.e10 (2020).
C. O’Donnell, T. J. Sejnowski, Selective memory generalization by spatial patterning of protein synthesis. Neuron 82, 398–412 (2014).
A. Schapiro, N. Turk-Browne, Statistical learning. Brain Mapp. 3, 501–506 (2015).
J. Sučević, A. C. Schapiro, A neural network model of hippocampal contributions to category learning. bioRxiv [Preprint] (2022). (Accessed 13 January 2022).
G. Van Rossum, F. L. Drake, Python 3 Reference Manual (CreateSpace, 2009).
A. Paszke et al., “PyTorch: An imperative style, high-performance deep learning library” in Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, R. Garnett, Eds. (Curran Associates, Inc., 2019), pp. 8024–8035.
T. E. Oliphant, A Guide to NumPy (Trelgol Publishing, 2006).
P. Virtanen et al., SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nat. Methods 17, 261–272 (2020). Correction in: Nat. Methods 17, 352 (2020).
W. McKinney, “Data structures for statistical computing in Python” in Proceedings of the 9th Python in Science Conference, S. van der Walt and J. Millman, Eds. (Austin, TX, 2010), pp. 51–56.
J. D. Hunter, Matplotlib: A 2D graphics environment. Comput. Sci. Eng. 9, 90–95 (2007).
M. Waskom et al., mwaskom/seaborn: v0.8.1 (2017). Accessed September 2017.
F. Pedregosa et al., Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).
H. Robbins, S. Monro, A stochastic approximation method. Ann. Math. Stat. 22, 400–407 (1951).
K. He, X. Zhang, S. Ren, J. Sun, Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. ArXiv [Preprint] (2015). (Accessed 25 February 2021).
J. Snoek, H. Larochelle, R. P. Adams, “Practical Bayesian optimization of machine learning algorithms” in Advances in Neural Information Processing Systems, F. Pereira, C. J. Burges, L. Bottou, K. Q. Weinberger, Eds. (Curran Associates, Inc., 2012).
Inkscape Project, Inkscape (2020). (Accessed 15 January 2021).
L. M. Tran, DNN Neurogenesis, GitHub repository, Deposited 28 January 2022.



Published in

Proceedings of the National Academy of Sciences
Vol. 119 | No. 45
November 8, 2022
PubMed: 36322739



Submission history

Received: April 17, 2022
Accepted: September 11, 2022
Published online: November 2, 2022
Published in issue: November 8, 2022


Keywords: adult neurogenesis, memory, machine learning, regularization, generalization




This article is a PNAS Direct Submission.



Neuroscience and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada
Department of Physiology, University of Toronto, Toronto, ON, Canada
Vector Institute, Toronto, ON, Canada
Adam Santoro
DeepMind, London, United Kingdom
Lulu Liu
Neuroscience and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada
Sheena A. Josselyn
Neuroscience and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada
Department of Physiology, University of Toronto, Toronto, ON, Canada
Department of Psychology, University of Toronto, Toronto, ON, Canada
Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada
Blake A. Richards
Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
School of Computer Science, McGill University, Montreal, QC, Canada
Mila, Montreal, QC, Canada
Learning in Machines and Brains, Canadian Institute for Advanced Research, Toronto, ON, Canada
Neuroscience and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada
Department of Physiology, University of Toronto, Toronto, ON, Canada
Department of Psychology, University of Toronto, Toronto, ON, Canada
Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada
Child and Brain Development Program, Canadian Institute for Advanced Research, Toronto, ON, Canada


To whom correspondence may be addressed. Email: [email protected].
Author contributions: L.M.T., A.S., B.A.R., and P.W.F. designed research; L.M.T. performed research; L.M.T. and L.L. analyzed data; and L.M.T., A.S., S.A.J., B.A.R., and P.W.F. wrote the paper.

Competing Interests

The authors declare no competing interest.
