Shaping the learning landscape in neural networks around wide flat minima
Edited by Yuhai Tu, IBM, Yorktown Heights, NY, and accepted by Editorial Board Member Herbert Levine November 20, 2019 (received for review May 20, 2019)

Significance
Deep neural networks (DNN) are becoming fundamental learning devices for extracting information from data in a variety of real-world applications and in natural and social sciences. The learning process in DNN consists of finding a minimizer of a loss function that measures how well the data are classified. This optimization task is typically solved by tuning millions of parameters by stochastic gradient algorithms. This process can be thought of as an exploration process of a highly nonconvex landscape. Here we show that such landscapes possess very peculiar wide flat minima and that the current models have been shaped to make the loss functions and the algorithms focus on those minima. We also derive efficient algorithmic solutions.
Abstract
Learning in deep neural networks takes place by minimizing a nonconvex high-dimensional loss function, typically by a stochastic gradient descent (SGD) strategy. The learning process is observed to be able to find good minimizers without getting stuck in local critical points and such minimizers are often satisfactory at avoiding overfitting. How these 2 features can be kept under control in nonlinear devices composed of millions of tunable connections is a profound and far-reaching open question. In this paper we study basic nonconvex 1- and 2-layer neural network models that learn random patterns and derive a number of basic geometrical and algorithmic features which suggest some answers. We first show that the error loss function presents few extremely wide flat minima (WFM) which coexist with narrower minima and critical points. We then show that the minimizers of the cross-entropy loss function overlap with the WFM of the error loss. We also show examples of learning devices for which WFM do not exist. From the algorithmic perspective we derive entropy-driven greedy and message-passing algorithms that focus their search on wide flat regions of minimizers. In the case of SGD and cross-entropy loss, we show that a slow reduction of the norm of the weights along the learning process also leads to WFM. We corroborate the results by a numerical study of the correlations between the volumes of the minimizers, their Hessian, and their generalization performance on real data.
Artificial neural networks (ANN), currently also known as deep neural networks (DNN) when they have more than 2 layers, are powerful nonlinear devices used to perform different types of learning tasks (1). From the algorithmic perspective, learning in ANN is in principle a hard computational problem in which a huge number of parameters, the connection weights, need to be optimally tuned. Yet, at least for supervised pattern recognition tasks, learning has become a relatively feasible process in many applications across domains and the performances reached by DNNs have had a huge impact on the field of machine learning.
DNN models have evolved very rapidly in the last decade, mainly by an empirical trial-and-selection process guided by heuristic intuitions. As a result, current DNN are in a sense akin to complex physical or biological systems, which are known to work but for which a detailed understanding of the underlying principles is still lacking. Efficient learning and generalization with limited overfitting are 2 properties that often coexist in DNN, and yet a unifying theoretical framework is still missing.
Here we provide analytical results on the geometrical structure of the loss landscape of ANN which shed light on the success of deep learning (2) algorithms and allow us to design efficient algorithmic schemes.
We focus on nonconvex 1- and 2-layer ANN models that exhibit sufficiently complex behavior and yet are amenable to detailed analytical and numerical studies. Building on methods of statistical physics of disordered systems, we analyze the complete geometrical structure of the minimizers of the loss function of ANN learning random patterns and discuss how the current DNN models are able to exploit such structure, for example starting from the choice of the loss function, avoiding algorithmic traps, and reaching rare solutions that belong to wide flat regions of the weight space. In our study the notion of flatness is given in terms of the volume of the weights around a minimizer that do not lead to an increase of the loss value. This generalizes the so-called local entropy of a minimizer (3), defined for discrete weights as the log of the number of optimal weights assignments within a given Hamming distance from the reference minimizer. We call these regions high local entropy (HLE) regions for discrete weights or wide flat minima (WFM) for continuous weights. Our results are derived analytically for the case of random data and corroborated by numerics on real data. In order to eliminate ambiguities that may arise from changes of scale of the weights, we control the norm of the weights in each of the units that compose the network. The outcomes of our study can be summarized as follows.
1) We show analytically that ANN learning random patterns possess the structural property of having extremely robust regions of optimal weights, namely WFM of the loss, whose existence is important to achieve convergence in the learning process. Although these wide minima are rare compared to the dominant critical points (absolute narrow minima, local minima, or saddle points in the loss surface), they can be accessed by a large family of simple learning algorithms. We also show analytically that other learning machines, such as the parity machine, do not possess WFM.
2) We show analytically that the choice of the cross-entropy (CE) loss function has the effect of biasing learning algorithms toward HLE or WFM regions.
3) We derive a greedy algorithm—entropic least-action learning (eLAL)—and a message passing algorithm—focusing belief propagation (fBP)—which zoom in their search on wide flat regions of minimizers.
4) We compute the volumes associated to the minimizers found by different algorithms using belief propagation (BP).
5) We show numerically that the volumes correlate well with the spectra of the Hessian on computationally tractable networks and with the generalization performance on real data. The algorithms that search for WFM display a spectrum that is much more concentrated around zero eigenvalues compared to plain stochastic gradient descent (SGD).
Our results on random patterns support the conclusion that the minimizers that are relevant for learning are not the most frequent isolated and narrow ones (which also are computationally hard to sample) but the rare ones that are extremely wide. While this phenomenon was recently disclosed for the case of discrete weights (3, 4), here we demonstrate that it is present also in nonconvex ANN with continuous weights. Building on these results we derive algorithmic schemes and shed light on the performance of SGD with the CE loss function. Numerical experiments suggest that the scenario generalizes to real data and is consistent with other numerical results on deeper ANN (5).
HLE/WFM Regions Exist in Nonconvex Neural Devices Storing Random Patterns
In what follows we analyze the geometrical structure of the weights space by considering the simplest nonconvex neural devices storing random patterns: the single-layer network with discrete weights and the 2-layer networks with both continuous and discrete weights. The choice of random patterns, for which no generalization is possible, is motivated by the possibility of using analytical techniques from statistical physics of disordered systems and by the fact that we want to identify structural features that do not depend on specific correlation patterns of the data.
The Simple Example of Discrete Weights.
In the case of binary weights it is well known that even for the single-layer network the learning problem is computationally challenging. Therefore, we begin our analysis by studying the so-called binary perceptron, which maps vectors of N inputs onto binary outputs through the sign of a weighted sum, σ = sign(W · ξ), where the N weights are themselves binary, W_i ∈ {−1, +1}.
Given a training set composed of P = αN random input patterns ξ^μ, each with a randomly assigned binary label σ^μ, the learning task consists of finding binary weights that classify all of the patterns correctly.
The standard analysis of this model is based on the study of the zero-temperature limit of the Gibbs measure with a loss (or energy) function that counts the number of misclassified patterns; in this limit the measure concentrates on the zero-error configurations, that is, on the solutions of the learning problem.
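As a concrete illustration of this setup, the following minimal Python sketch builds random ±1 patterns and labels and evaluates the error-counting loss of a random binary weight vector (the sizes and the load α are illustrative choices, not the values used in the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    N = 101                      # number of inputs (odd, to avoid ties in the sign)
    alpha = 0.5                  # load: P = alpha * N patterns
    P = int(alpha * N)

    xi = rng.choice([-1, 1], size=(P, N))      # random +-1 input patterns
    sigma = rng.choice([-1, 1], size=P)        # random +-1 labels

    def ne_loss(W, xi, sigma):
        """Error-counting (energy) loss: number of misclassified patterns."""
        return int(np.sum(np.sign(xi @ W) != sigma))

    W = rng.choice([-1, 1], size=N)            # a random binary weight configuration
    print("misclassified patterns:", ne_loss(W, xi, sigma), "out of", P)

At zero temperature only the configurations with ne_loss equal to 0 (the solutions) carry weight in the Gibbs measure.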
Local Entropy Theory.
The existence of effective learning algorithms indicates that the traditional statistical physics calculations, which focus on the set of solutions that dominate the zero-temperature Gibbs measure (i.e., the most numerous ones), are effectively blind to the solutions actually found by such algorithms. Numerical evidence suggests that in fact the solutions found by heuristics are not at all isolated; on the contrary, they appear to belong to regions with a high density of other nearby solutions. This puzzle has been solved very recently by an appropriate large deviations study (3, 4, 13, 14) in which the tools of statistical physics have been used to study the most probable value of the local entropy of the loss function, that is, a function that is able to detect the existence of regions with an exponentially large number of solutions within a given small distance from a reference configuration.
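In formulas, and keeping only the essential ingredients (a sketch of the definition recalled above; the paper's exact normalization may differ), the local entropy of a reference configuration counts the zero-error configurations in its neighborhood:

    \mathcal{N}(\tilde{W}, d) = \sum_{W} \mathbf{1}\left[ E(W) = 0 \right] \,
                                \mathbf{1}\left[ d_H(W, \tilde{W}) \le d \right],
    \qquad
    S(\tilde{W}, d) = \frac{1}{N} \log \mathcal{N}(\tilde{W}, d),

where E is the error-counting loss and d_H the Hamming distance; in the continuous case the sum becomes an integral over normalized weights and the count becomes a volume.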
The probability measure Eq. 8 can be written in an equivalent form that generalizes to the nonzero errors regime, is analytically simpler to handle, and leads to novel algorithmic schemes (4):
The crucial property of Eq. 10 comes from the observation that by choosing y to be a nonnegative integer, the partition function can be rewritten as the partition function of y interacting replicas of the system, coupled to the reference configuration.
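Schematically, and as a hedged reconstruction consistent with the large-deviation construction of refs. 3 and 4 (the paper's Eqs. 8-13 are not reproduced here and may differ in details), the reweighted measure and its integer-y representation take the form

    P(\tilde{W}) \propto e^{\, y \, \Phi(\tilde{W}; \beta, \gamma)},
    \qquad
    \Phi(\tilde{W}; \beta, \gamma) = \log \sum_{W} e^{-\beta E(W) - \gamma \, d(W, \tilde{W})},

    Z_y = \sum_{\tilde{W}} e^{\, y \, \Phi(\tilde{W})}
        = \sum_{\tilde{W}} \ \sum_{\{W^a\}_{a=1}^{y}}
          \exp\Big( -\beta \sum_{a=1}^{y} E(W^a) - \gamma \sum_{a=1}^{y} d(W^a, \tilde{W}) \Big),

where the y summed configurations play the role of real replicas attracted toward the reference configuration.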
Two-Layer Networks with Continuous Weights.
As for the discrete case, we are able to show that in nonconvex networks with continuous weights the WFMs exist and are rare and yet accessible to simple algorithms. In order to perform an analytic study, we consider the simplest nontrivial 2-layer neural network, the committee machine with nonoverlapping receptive fields. It consists of N input units, one hidden layer with K units, and one output unit. The input units are divided into K disjoint sets of N/K inputs each, one set per hidden unit.
As for the binary perceptron, the learning problem consists of mapping each of the random input patterns onto a prescribed random binary output.
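A minimal Python sketch of this architecture (the tree committee machine with sign activations on the hidden units and a majority vote at the output; the sizes below are illustrative, not those used in the paper):

    import numpy as np

    rng = np.random.default_rng(1)

    K = 3                        # hidden units (odd, so the majority vote has no ties)
    M = 201                      # inputs per hidden unit, N = K * M
    P = 200                      # number of random patterns

    def committee_output(W, x):
        """Tree committee machine: each unit sees its own block of N/K inputs."""
        tau = np.sign(np.einsum('km,km->k', W, x.reshape(K, M)))   # hidden outputs
        return np.sign(tau.sum())                                  # majority vote

    W = rng.normal(size=(K, M))
    W /= np.linalg.norm(W, axis=1, keepdims=True)                  # per-unit normalization

    xi = rng.choice([-1, 1], size=(P, K * M))
    sigma = rng.choice([-1, 1], size=P)
    errors = sum(committee_output(W, x) != s for x, s in zip(xi, sigma))
    print("errors of a random committee:", errors, "out of", P)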
The committee machine was studied extensively in the 1990s (15⇓–17). The capacity of the network can be derived by computing the typical weight space volume as a function of the number of correctly classified patterns
For the smallest nontrivial value of K, namely K = 3, the computation can be carried out explicitly.
The capacity of the network, above which perfect classification becomes impossible, is found to be
The Existence of WFM.
In order to detect the existence of WFM we use a large deviation measure which is the continuous version of the measure used in the discrete case: Each configuration of the weights is reweighted by a local volume term, analogously to the analysis in Local Entropy Theory. For the continuous case, however, we adopt a slightly different formalism which simplifies the analysis. Instead of constraining the set of y real replicas† to be at distance D from a reference weight vector, we can identify the same WFM regions by constraining them directly to be at a given mutual overlap: For a given value of the mutual overlap among the replicas, we compute the typical volume they occupy and compare it with the volume that would be available at the same overlap if no patterns were stored; the log-ratio of the 2 defines the normalized local entropy.
Normalized local entropy as a function of the distance (Fig. 1).
The results of the above WFM computation may require small corrections due to RSB effects, which, however, are expected to be very tiny due to the compact nature of the space of solutions at small distances.
A more informative aspect is to study the volumes around the solutions found by different algorithms. This can be done by the BP method, similarly to the computation of the weight enumerator function in error correcting codes (21).
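The paper performs this computation with BP; purely to illustrate the notion of local volume (this is a crude Monte Carlo probe, not the method used in the paper), one can estimate the fraction of configurations at a fixed distance from a solution that are still solutions. A Python sketch on a single-layer network with continuous weights and assumed, illustrative sizes:

    import numpy as np

    rng = np.random.default_rng(2)
    N, P = 200, 50                             # low load, so solutions are easy to find

    xi = rng.choice([-1, 1], size=(P, N))
    sigma = rng.choice([-1, 1], size=P)

    def is_solution(W):
        return bool(np.all(np.sign(xi @ W) == sigma))

    # Find a reference solution with the plain perceptron rule.
    W = rng.normal(size=N)
    for _ in range(10000):
        wrong = np.flatnonzero(np.sign(xi @ W) != sigma)
        if wrong.size == 0:
            break
        mu = rng.choice(wrong)
        W = W + sigma[mu] * xi[mu]
    W /= np.linalg.norm(W)

    def local_volume_probe(W, dist, n_samples=2000):
        """Fraction of random configurations at normalized distance `dist` from W
        (on the unit sphere) that still classify all patterns correctly."""
        hits = 0
        for _ in range(n_samples):
            u = rng.normal(size=N)
            u -= (u @ W) * W                   # component orthogonal to W
            u /= np.linalg.norm(u)
            W_pert = (1 - dist) * W + np.sqrt(1 - (1 - dist) ** 2) * u   # unit norm
            hits += is_solution(W_pert)
        return hits / n_samples

    for d in (0.01, 0.05, 0.1):
        print(f"distance {d}: fraction of nearby solutions = {local_volume_probe(W, d):.3f}")

A flatter (wider) minimum corresponds to a fraction that decays more slowly with the distance.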
Not All Devices Are Appropriate: The Parity Machine Does Not Display HLE/WFM Regions.
The extent to which a given model exhibits the presence of WFM can vary (see, e.g., Fig. 1). A direct comparison of the local entropy curves on different models in general does not yet have a well-defined interpretation, although at least for similar architectures it can still be informative (22). On the other hand, the existence of WFM itself is a structural property. For neural networks, its origin relies on the threshold-sum form of the nonlinearity characterizing the formal neurons. As a check of this claim, we can analyze a model that is in some sense complementary, namely the so-called parity machine. We take its network structure to be identical to the committee machine, except for the output unit, which performs the product of the K hidden units instead of taking a majority vote. While the outputs of the hidden units are still given by sign activations, Eq. 14, the overall output of the network is the product of the K hidden outputs.
Parity machines are closely related to error-correcting codes based on parity checks. The geometrical structure of the absolute minima of the error loss function is known (20) to be composed of multiple regions, each in one-to-one correspondence with the internal representations of the patterns. For random patterns such regions are typically tiny and we expect the WFM to be absent. Indeed, the computation of the volume proceeds analogously to the previous case,§ and it shows that in this case, for any distance, the volumes of the minima are always bounded away from the maximal possible volume, that is, the volume one would find for the same distance when no patterns are stored; the log-ratio of the 2 volumes is constant and strictly negative.
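In code, the only difference between the two machines is the output unit; a minimal sketch in Python (function names and the tree structure follow the illustrative committee-machine sketch given earlier):

    import numpy as np

    def hidden_outputs(W, x):
        """Sign activations of the K hidden units of a tree architecture."""
        K = W.shape[0]
        return np.sign(np.einsum('km,km->k', W, x.reshape(K, -1)))

    def committee_output(W, x):
        return np.sign(hidden_outputs(W, x).sum())    # majority vote

    def parity_output(W, x):
        return np.prod(hidden_outputs(W, x))          # product of the hidden outputs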
The Connection between Local Entropy and CE.
Given that dense regions of optimal solutions exist in nonconvex ANN, at least in the case of independent random patterns, it remains to be seen which role they play in current models. Starting with the case of binary weights, and then generalizing the result to more complex architectures and to continuous weights, we can show that the most widely used loss function, the so-called CE loss, focuses precisely on such rare regions (see ref. 23 for the case of stochastic weights).
For the sake of simplicity, we consider a binary classification task with one output unit. The CE cost function for each input pattern reads
Mean error rate achieved when optimizing the CE loss in the binary single-layer network, as predicted by the replica analysis, at various values of γ (increasing from top to bottom).
For the binary case, however, the norm is fixed and thus we must keep γ as an explicit parameter. Note that since
We proceed by first showing that the minimizers of this loss correspond to near-zero errors for a wide range of values of α and then by showing that these minimizers are surrounded by an exponential number of zero error solutions.
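As an illustration of the role of γ (using, as an assumption, the common logistic form of the CE for ±1 outputs, ℓ(σ, Δ) = log(1 + exp(−2γσΔ)), which may differ from the paper's exact normalization), a short Python sketch comparing the CE with the error-counting loss:

    import numpy as np

    def ce_loss(sigma, delta, gamma):
        """Binary cross-entropy for +-1 labels, assumed form log(1 + exp(-2*gamma*sigma*delta)),
        written in a numerically stable way."""
        return np.logaddexp(0.0, -2.0 * gamma * sigma * delta)

    def ne_loss(sigma, delta):
        """Error-counting loss: 1 if the pattern is misclassified, 0 otherwise."""
        return (np.sign(delta) != sigma).astype(float)

    delta = np.array([-2.0, -0.5, 0.5, 2.0])   # preactivations of a +1-labeled pattern
    for gamma in (0.5, 2.0, 10.0):
        print(f"gamma = {gamma:5.1f}  CE:", np.round(ce_loss(1, delta, gamma), 3))
    print("                NE:", ne_loss(1, delta))

At small γ the CE keeps a nonzero gradient even on correctly classified patterns, while at large γ it concentrates on the misclassified ones and, up to scale, approaches the behavior of the error-counting loss.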
In order to study the probability distribution of the minima of the CE loss we use the replica method.
Having established that by minimizing the CE one ends up in regions of perfect classification where the error loss function is essentially zero, it remains to be understood which type of configurations of weights are found. Does the CE converge to an isolated point-like solution in the weight space (such as the typical zero energy configurations of the error function)¶ or does it converge to the rare regions of HLE?
In order to establish the geometrical nature of the typical minima of the CE loss, we need to compute the average value of the local entropy around those minima.
Average local entropy around a typical minimum of the CE loss.
As an algorithmic check we have verified that while a simulated annealing approach gets stuck at very high energies when trying to minimize the error loss function, the very same algorithm with the CE loss is indeed successful up to relatively high values of α, with just slightly worse performance compared to an analogous procedure based on local entropy (13). In other words, the CE loss on single-layer networks is a computationally cheap and reasonably good proxy for the LE loss. These analytical results extend straightforwardly to 2-layer networks with binary weights. The study of continuous weight models can be performed by resorting to the BP method.
BP and fBP
BP, also known as sum-product, is an iterative message-passing algorithm for statistical inference. When applied to the problem of training a committee machine with a given set of input–output patterns, it can be used to obtain, at convergence, useful information on the probability distribution, over the weights of the network, induced by the Gibbs measure. In particular, it allows one to compute the marginals of the weights as well as their entropy, which in the zero-temperature regime is simply the logarithm of the volume of the solutions, Eq. 15, rescaled by the number of variables N. The results are approximate, but (with high probability) they approach the correct value in the limit of large N in the case of random uncorrelated inputs, at least in the replica-symmetric phase of the space of the parameters. Due to the concentration property, in this limit the macroscopic properties of any given problem (such as the entropy) tend to converge to a common limiting case, and therefore a limited number of experiments on a few samples is sufficient to describe the entire statistical ensemble very well.
We have used BP to study the case of the zero-temperature tree-like committee machine with continuous weights and
However, BP can be employed to perform additional analyses as well. In particular, it can be modified rather straightforwardly to explore and describe the region surrounding any given configuration, as it allows one to compute the local entropy (i.e., the log-volume of the solutions) for any given distance and any reference configuration (this is a general technique; the details for our case are reported in SI Appendix). The convergence issues are generally much less severe in this case. Even in the RSB phase, if the reference configuration is a solution in a wide minimum, the structure is locally replica-symmetric, and therefore the algorithm converges and provides accurate results, at least up to a value of the distance where other unconnected regions of the solution space come into play. In our tests, the only other issue arose occasionally at very small distances, where convergence is instead prevented by the loss of accuracy stemming from finite-size effects and limited numerical precision.
Additionally, the standard BP algorithm can be modified and transformed into a (very effective) solver. There are several ways to do this, most of which are purely heuristic. However, it was shown in ref. 4 that adding a particular set of self-interactions to the weight variables could approximately but effectively describe the replicated system of Eq. 12. In other words, this technique can be used to analyze the local-entropy landscape instead of the Gibbs one. By using a sufficiently large number of replicas y (we generally used
eLAL
Least-action learning (LAL), also known as minimum-change rule (26⇓–28), is a greedy algorithm that was designed to extend the well-known perceptron algorithm to the case of committee machines with a single binary output and sign activation functions. It takes one parameter, the learning rate η. In its original version, patterns are presented randomly one at a time, and at most one hidden unit is affected at a time. In case of correct output, nothing is done, while in case of error the hidden unit, among those with a wrong output, whose preactivation was closest to the threshold (and is thus the easiest to fix) is selected, and the standard perceptron learning rule (with rate η) is applied to it. In our tests we simply extended it to work in mini-batches, to make it more directly comparable with stochastic-gradient-based algorithms: For a given mini-batch, we first compute all of the preactivations and the outputs for all patterns, then we apply the LAL learning rule for each pattern in turn.
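A Python sketch of the mini-batch LAL rule described above, for the tree committee machine (variable names and shapes are illustrative):

    import numpy as np

    def lal_minibatch_update(W, X, labels, eta):
        """One LAL mini-batch update.
        W: (K, M) weights, one row per hidden unit; X: (B, K*M) inputs; labels: (B,) in {-1,+1}."""
        K, M = W.shape
        Xr = X.reshape(len(X), K, M)
        A = np.einsum('km,bkm->bk', W, Xr)          # all preactivations, computed first
        tau = np.sign(A)                            # hidden outputs
        out = np.sign(tau.sum(axis=1))              # network outputs (K odd: no ties)
        for b in range(len(X)):
            if out[b] == labels[b]:
                continue                            # correct output: do nothing
            wrong = np.flatnonzero(tau[b] != labels[b])     # units disagreeing with the label
            k = wrong[np.argmin(np.abs(A[b, wrong]))]       # closest to threshold: easiest to fix
            W[k] += eta * labels[b] * Xr[b, k]              # perceptron rule on that unit only
        return W

Only one hidden unit is touched per misclassified pattern, which is what keeps the rule fast.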
This algorithm proves to be surprisingly effective at finding minima of the NE (number-of-errors) loss very quickly: In the random patterns case, its algorithmic capacity is higher than that of gradient-based variants and almost as high as that of fBP, and it requires comparatively few epochs. It is also computationally very fast, owing to its simplicity. However, as we show in Numerical Studies, it finds solutions that are much narrower than those of other algorithms.
In order to drive LAL toward WFM regions, we add a local-entropy component to it, by applying the technique described in ref. 4 (see Eq. 13): We run y replicas of the system in parallel and we couple them with an elastic interaction. The resulting algorithm, which we call eLAL, can be described as follows. We initialize y replicas randomly with independent weights; at each step every replica performs a LAL update on its own shuffled mini-batch, and all replicas are then attracted toward their center of mass, with a coupling strength that is progressively increased until the replicas collapse.
This simple scheme proves rather effective at enhancing the wideness of the minima found while still being computationally efficient and converging quickly, as we show in Numerical Studies.
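A sketch of the eLAL scheme under the assumptions stated above (the elastic-coupling schedule and parameter names are illustrative, not the paper's exact protocol); it builds on the lal_minibatch_update function sketched in the previous section:

    import numpy as np

    def elal_epoch(W_replicas, X, labels, eta, lam, batch_size, rng, lal_update):
        """One eLAL epoch: each replica runs LAL on its own shuffled mini-batches,
        then all replicas are pulled toward their center of mass with strength `lam`."""
        P = len(X)
        for a in range(len(W_replicas)):
            perm = rng.permutation(P)                      # independent shuffling per replica
            for start in range(0, P, batch_size):
                idx = perm[start:start + batch_size]
                W_replicas[a] = lal_update(W_replicas[a], X[idx], labels[idx], eta)
        center = np.mean(W_replicas, axis=0)               # center of mass of the replicas
        for a in range(len(W_replicas)):
            W_replicas[a] += lam * (center - W_replicas[a])    # elastic attraction
        return W_replicas

Increasing lam across epochs progressively tightens the coupling until the replicas collapse onto a single configuration, which serves as the stopping criterion mentioned below.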
Numerical Studies
We conclude our study by comparing numerically the curvature, the wideness of the minima, and the generalization error found by different approaches. We consider 2 main scenarios: One, directly comparable with the theoretical calculations, where a tree committee machine with continuous weights is trained on random patterns, and another where a small fully connected 2-layer network is trained on a binary classification task derived from the Fashion-MNIST dataset.
We compare several training algorithms with different settings (Materials and Methods): stochastic GD with the CE loss (ceSGD), LAL and its entropic version eLAL, and fBP. Of these, the nongradient-based ones (LAL, eLAL, and fBP) can be directly used with the sign activation functions (Eq. 14) and the NE loss. On the other hand, ceSGD requires a smooth loss landscape, and therefore we used tanh activations, adding a gradually diverging parameter β in their argument, since tanh(βx) approaches sign(x) as β grows.
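A self-contained Python sketch of ceSGD on the tree committee machine, with assumed forms for the smoothed network (tanh(β·) activations, output preactivation Δ fed into a logistic CE with scale γ) and assumed annealing schedules; it is meant only to illustrate the structure of the procedure, not to reproduce the paper's settings:

    import numpy as np

    rng = np.random.default_rng(3)
    K, M = 3, 201                       # K hidden units with M = N/K inputs each (illustrative)
    N, P, B, eta = K * M, 600, 100, 0.1

    X = rng.choice([-1.0, 1.0], size=(P, N))
    labels = rng.choice([-1.0, 1.0], size=P)

    W = rng.normal(size=(K, M))
    W /= np.linalg.norm(W, axis=1, keepdims=True)          # per-unit normalization

    beta, gamma = 1.0, 0.5                                 # initial values (assumed)
    beta_rate, gamma_rate = 1.05, 1.02                     # annealing factors per epoch (assumed)

    def train_errors(W):
        tau = np.sign(np.einsum('km,pkm->pk', W, X.reshape(P, K, M)))
        return int(np.sum(np.sign(tau.sum(axis=1)) != labels))

    for epoch in range(200):
        perm = rng.permutation(P)
        for start in range(0, P, B):
            xb = X[perm[start:start + B]].reshape(-1, K, M)
            yb = labels[perm[start:start + B]]
            a = np.einsum('km,bkm->bk', W, xb) / np.sqrt(M)        # hidden preactivations
            h = np.tanh(beta * a)                                  # smoothed sign activations
            delta = h.sum(axis=1) / np.sqrt(K)                     # smoothed output preactivation
            s = 1.0 / (1.0 + np.exp(2.0 * gamma * yb * delta))     # logistic factor of the CE gradient
            dl_da = (-2.0 * gamma * yb * s)[:, None] * beta * (1.0 - h ** 2) / np.sqrt(K)
            grad = np.einsum('bk,bkm->km', dl_da, xb) / (np.sqrt(M) * len(yb))
            W -= eta * grad
            W /= np.linalg.norm(W, axis=1, keepdims=True)          # keep each unit normalized
        beta *= beta_rate                                          # sharpen the activations ...
        gamma *= gamma_rate                                        # ... and the CE scale over time
        err = train_errors(W)
        if epoch % 20 == 0 or err == 0:
            print("epoch", epoch, "training errors:", err)
        if err == 0:
            break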
In all cases—for uniformity of comparison, simplicity, and consistency with the theoretical analysis—we consider scenarios in which the training error (i.e., the NE loss) gets to zero. This is, by definition, the stopping condition for the LAL algorithm. We also used this as a stopping criterion for ceSGD in the “fast” setting. For the other algorithms, the stopping criterion was based on reaching a sufficiently small loss (ceSGD in the “slow” setting), or the collapse of the replicas (eLAL and fBP).
The analysis of the quality of the results was mainly based on the study of the local loss landscape at the solutions. On one hand, we computed the normalized local entropy using BP as described in a previous section, which provides a description of the NE landscape. On the other hand, we also computed the spectrum of the eigenvalues of a smoothed-out version of the NE loss, namely the mean square error (MSE) loss computed on networks with tanh activations. This loss depends on the parameter β of the activations: We set β to be as small as possible (maximizing the smoothing and thereby measuring features of the landscape at a large scale) under the constraint that all of the solutions under consideration still corresponded to zero error (to prevent degrading the performance). For the Fashion-MNIST case, we also measured the generalization error of each solution and the robustness to input noise.
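As an illustration of this kind of measurement (a naive finite-difference Hessian on a tiny network, not the procedure or the sizes used in the paper; the smoothed MSE form below is an assumption), one can compute the Hessian of the smoothed loss at a given configuration and inspect its eigenvalues:

    import numpy as np

    def mse_loss_factory(X, labels, K, beta):
        """Smoothed MSE loss of a tree committee machine with tanh(beta * x) activations,
        as a function of the flattened weight vector (assumed form)."""
        P, N = X.shape
        M = N // K
        def loss(w_flat):
            W = w_flat.reshape(K, M)
            h = np.tanh(beta * np.einsum('km,pkm->pk', W, X.reshape(P, K, M)) / np.sqrt(M))
            out = np.tanh(beta * h.sum(axis=1) / np.sqrt(K))
            return np.mean((out - labels) ** 2)
        return loss

    def hessian_fd(f, w, eps=1e-4):
        """Dense Hessian of f at w via central finite differences (tiny problems only)."""
        n = len(w)
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(i, n):
                ei = np.zeros(n); ei[i] = eps
                ej = np.zeros(n); ej[j] = eps
                H[i, j] = H[j, i] = (f(w + ei + ej) - f(w + ei - ej)
                                     - f(w - ei + ej) + f(w - ei - ej)) / (4 * eps ** 2)
        return H

    # Usage sketch on a tiny instance (the finite-difference Hessian needs O(n^2) loss calls).
    rng = np.random.default_rng(4)
    K, M, P = 3, 7, 20
    X = rng.choice([-1.0, 1.0], size=(P, K * M))
    labels = rng.choice([-1.0, 1.0], size=P)
    w = rng.normal(size=K * M)
    w /= np.linalg.norm(w)
    eig = np.linalg.eigvalsh(hessian_fd(mse_loss_factory(X, labels, K, beta=2.0), w))
    print("largest Hessian eigenvalues:", np.round(np.sort(eig)[-5:], 4))

A spectrum concentrated near zero signals a locally flat landscape, which is the property compared across algorithms below.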
In the random patterns scenario we set
Normalized local entropy as a function of the distance from a reference solution, on a tree-like committee machine trained on random patterns, for the solutions found by the different algorithms (Fig. 4).
Spectra of the Hessian for the same solutions of Fig. 4, for various algorithms. The spectra are directly comparable since they are all computed on the same loss function (MSE; using CE does not change the results qualitatively) and the networks are normalized. (Top) The results with the parameter β of the activation functions set to a value such that all solutions of all algorithms are still valid; this value is exclusively determined by the LAL algorithm. (Bottom) The results for a much lower value of β that can be used when removing the LAL solutions, where differences between ceSGD-slow, eLAL, and fBP that were not visible at higher β can emerge (the spectrum of LAL would still be the widest by far even at this β).
We then studied the performance of ceSGD (fast and slow settings), LAL, and eLAL on a small fully connected network that learns to discriminate between 2 classes of the Fashion-MNIST dataset (we chose the classes Dress and Coat, which are rather challenging to tell apart but also sufficiently different to offer the opportunity to generalize even with a small simple network trained on very few examples). We trained our networks on a small subset of the available examples (500 patterns, binarized to ±1).
Experiments on a subset of Fashion-MNIST. (Top) Average normalized local entropies. (Bottom) Test error vs. maximum eigenvalue of the Hessian spectrum.
We also performed an additional batch of tests on a randomized version of the Fashion-MNIST dataset, in which the inputs were reshuffled across samples on a pixel-by-pixel basis (such that each sample retained only the individual pixel biases while the correlations across pixels were lost). This allowed us to bridge the 2 scenarios and directly compare the local volumes in the presence or absence of features of the data that can lead to proper generalization. We kept the settings of each algorithm as close as possible to those for the Fashion-MNIST tests. Qualitatively, the results were quite similar to the ones on the original data except for a slight degradation of the performance of eLAL compared to ceSGD. Quantitatively, we observed that the randomized version was more challenging and generally resulted in slightly smaller volumes. Additional measures comparing the robustness to the presence of noise in the input (which probes overfitting and can thus be seen as a precursor of generalization) confirm the general picture. The detailed procedures and results are reported in SI Appendix.
Conclusions and Future Directions
In this paper, we have generalized the local entropy theory to continuous weights and we have shown that WFM exist in nonconvex neural systems. We have also shown that the CE loss spontaneously focuses on WFM. On the algorithmic side we have derived and designed algorithmic schemes, either greedy (very fast) or message-passing, which are driven by the local entropy measure. Moreover, we have shown numerically that ceSGD can be made to converge to WFM by an appropriate cooling procedure of the parameter which controls the norm of the weights. Our findings are in agreement with recent results showing that rectified linear unit transfer functions also help the learning dynamics to focus on WFM (22). Future work will be aimed at extending our methods to multiple layers, trying to reach a unified framework for current DNN models. This is a challenging task which has the potential to reveal the role that WFM play in generalization in different data regimes and how that can be connected to the many-layer architectures of DNN.
Materials and Methods
1-RSB Formalism to Analyze Subdominant Dense States.
The relationship between the local entropy measure (12) and the 1-RSB formalism is direct and is closely related to the work of Monasson (30). All of the technical details are given in SI Appendix; here we just give the high-level description of the derivation.
Consider a partition function in which the interaction among the real replicas is pairwise (without the reference configuration).
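Schematically (a sketch consistent with the description above and with Monasson's construction; the exact expression used in the paper is in SI Appendix and may differ in normalizations), such a partition function reads

    Z_y(\beta, \gamma) = \sum_{\{W^a\}_{a=1}^{y}}
        \exp\Big( -\beta \sum_{a=1}^{y} E(W^a) + \gamma \sum_{a<b} W^a \cdot W^b \Big),

where the pairwise attraction among the y real replicas replaces the coupling to the reference configuration, and y enters the saddle-point equations in the same way as the Parisi parameter of the 1-RSB ansatz.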
Numerical Experiments.
Here we provide the details and the settings used for the experiments reported in Numerical Studies.
In all of the experiments and for all algorithms except fBP we have used a mini-batch size of 100. The mini-batches were generated by randomly shuffling the datasets and splitting them at each epoch. For eLAL, the permutations were performed independently for each replica. Also, for all algorithms except fBP the weights were initialized from a uniform distribution and then normalized for each unit. The learning rate η was kept fixed throughout the training. The parameters γ and β for the ceSGD algorithm were initialized at given values and then varied progressively during training, following the schedules reported below.
Parameters for the case of random patterns.
ceSGD-fast:
Parameters for the Fashion-MNIST experiments.
ceSGD-fast:
Data Availability.
The code and scripts for the tests reported in this paper have been deposited at https://gitlab.com/bocconi-artlab/TreeCommitteeFBP.jl.
Acknowledgments
C.B. and R.Z. acknowledge Office of Naval Research Grant N00014-17-1-2569. We thank Leon Bottou for discussions.
Footnotes
1 C.B. and R.Z. contributed equally to this work.
2 To whom correspondence may be addressed. Email: carlo.baldassi@unibocconi.it or riccardo.zecchina@unibocconi.it.
Author contributions: C.B. and R.Z. designed research; C.B., F.P., and R.Z. performed research; and C.B. and R.Z. wrote the paper.
The authors declare no competing interest.
This article is a PNAS Direct Submission. Y.T. is a guest editor invited by the Editorial Board.
Data deposition: The code and scripts for the tests reported in this paper have been deposited at https://gitlab.com/bocconi-artlab/TreeCommitteeFBP.jl.
*Strictly speaking, each cluster is composed of a multitude of exponentially small domains (20).
†Not to be confused with the virtual replicas of the replica method.
‡All of the details are reported in SI Appendix.
§It is actually even simpler; see SI Appendix.
¶A quite unlikely fact, given that finding isolated solutions is a well-known intractable problem.
This article contains supporting information online at https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.1908636117/-/DCSupplemental.
- Copyright © 2020 the Author(s). Published by PNAS.
This open access article is distributed under Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND).
References
- D. J. MacKay
- C. Baldassi, A. Ingrosso, C. Lucibello, L. Saglietti, R. Zecchina
- C. Baldassi et al.
- N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
- W. Krauth, M. Mézard
- J. Ding, N. Sun
- H. Huang, Y. Kabashima
- H. Horner
- A. Braunstein, R. Zecchina
- C. Baldassi, A. Braunstein, N. Brunel, R. Zecchina
- C. Baldassi
- C. Baldassi, A. Ingrosso, C. Lucibello, L. Saglietti, R. Zecchina
- C. Baldassi, F. Gerace, C. Lucibello, L. Saglietti, R. Zecchina
- H. Schwarze, J. Hertz
- M. Mézard, G. Parisi, M. Virasoro
- C. Di, T. J. Richardson, R. L. Urbanke
- C. Baldassi, E. M. Malatesta, R. Zecchina
- C. Baldassi et al.
- S. Franz, G. Parisi
- F. Krzakala et al.
- W. C. Ridgway
- B. Widrow, F. W. Smith (in J. T. Tou, R. H. Wilcox, Eds.)
- G. Mitchison, R. Durbin
- H. Xiao, K. Rasul, R. Vollgraf