Quantifying stochastic uncertainty in detection time of human-caused climate signals
Edited by Mark H. Thiemens, University of California San Diego, La Jolla, CA, and approved August 27, 2019 (received for review March 18, 2019)
This article has a Correction.
Significance
Climate observations comprise a single sequence of natural internal variability superimposed on the response to external forcings. Large initial condition ensembles (LEs) performed with a single climate model provide many different sequences of internal variability and forced response. LEs allow analysts to quantify random uncertainty in the time required to detect forced “fingerprint” patterns. For tropospheric temperature, the consistency between fingerprint detection times in satellite data and in 2 different LEs depends primarily on the size of the simulated warming in response to greenhouse gas increases and the simulated cooling caused by anthropogenic aerosols. Consistency is closest for a model with high sensitivity and large aerosol-driven cooling. Assessing whether this result is physically reasonable will require reducing currently large aerosol forcing uncertainties.
Abstract
Large initial condition ensembles of a climate model simulation provide many different realizations of internal variability noise superimposed on an externally forced signal. They have been used to estimate signal emergence time at individual grid points, but are rarely employed to identify global fingerprints of human influence. Here we analyze 50- and 40-member ensembles performed with 2 climate models; each was run with combined human and natural forcings. We apply a pattern-based method to determine signal detection times in individual ensemble members and in satellite temperature observations.
Large initial condition ensembles (LEs) are routinely performed by climate modeling groups (1–3). Typical LE sizes range from 30 to 100. Individual LE members are generated with the same model and external forcings, but are initialized from different states of the climate system (3). Each LE member provides a unique realization of the “noise” of natural internal variability superimposed on the underlying climate “signal” (the response to the applied changes in forcing).
Because internal variability in a LE is uncorrelated across realizations, averaging over ensemble members damps noise and improves estimates of externally forced signals (Fig. 1). LEs are a valuable test bed for analyzing the signal-to-noise (S/N) characteristics of different regions, seasons, and climate variables (1–5) and for comparing simulated and observed internal variability (6). Such information can inform “fingerprint” studies (7), which seek to identify externally forced climate change patterns in observations (8, 9).
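As a simple illustration of this noise-damping effect, the following sketch uses synthetic data only (an idealized linear signal plus white noise, with ensemble size and record length loosely patterned on the LEs analyzed here) to show that residual noise in the ensemble mean is reduced by roughly the square root of the ensemble size:

```python
import numpy as np

rng = np.random.default_rng(0)

n_members, n_years = 50, 40              # loosely patterned on a 50-member LE and the satellite era
signal = 0.02 * np.arange(n_years)       # idealized forced warming (degrees C), linear in time
noise = 0.15 * rng.standard_normal((n_members, n_years))  # uncorrelated internal variability

members = signal + noise                 # each row is one realization: signal plus noise
ensemble_mean = members.mean(axis=0)

# Residual noise in the ensemble mean is damped by roughly 1/sqrt(n_members)
# relative to a single realization.
print("single-member residual std:", round(float((members[0] - signal).std()), 3))
print("ensemble-mean residual std:", round(float((ensemble_mean - signal).std()), 3))
```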
Fig. 1. Time series of anomalies in annual-mean atmospheric temperature in the 50- and 40-member CanESM2 and CESM1 ALL ensembles (respectively) and in RSS, STAR, and UAH satellite data. (A–C) Results are for TLS (A), TMT (B), and TLT (C). Temperatures are spatially averaged over areas of common coverage in the simulations and satellite data (82.5°N to 82.5°S for TLS and TMT and 82.5°N to 70°S for TLT). The reference period is 1979 to 1981. STAR does not provide TLT data.
Few previous fingerprint studies have exploited LEs (5, 10). Our study uses LEs to quantify stochastic uncertainty in the detection time of global and hemispheric fingerprint patterns and assesses whether this stochastic uncertainty encompasses the actual fingerprint detection times in observations. We employ LEs performed with 2 different climate models for this purpose. We also compare information from local S/N analysis at individual grid points with results from the S/N analysis of large-scale patterns.
Most fingerprint studies rely on internal variability estimates from a multimodel ensemble of control runs with no year-to-year changes in external forcings (8, 9, 11). Alternative internal variability estimates can be obtained from externally forced LEs performed with a single model (5). We evaluate here whether these estimates are similar and whether the type and size of external forcing in a LE modulate internal variability (12) and influence fingerprint detection time.
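As an illustration of how these two kinds of internal variability estimates might be compared, the sketch below (a hypothetical function with hypothetical inputs, not the analysis actually performed here) contrasts variability from an unforced control run with the between-realization departures from a forced LE's ensemble mean:

```python
import numpy as np

def compare_variability(control_anoms, le_anoms):
    """Compare two internal-variability estimates for one spatially averaged series.

    control_anoms : 1D array of anomalies from an unforced control run
    le_anoms      : 2D array (members, years) of anomalies from a forced LE
    """
    # Control-run estimate: variability about the control-run time mean.
    control_std = float(np.std(control_anoms - np.mean(control_anoms)))
    # LE estimate: variability of departures from the ensemble mean, which
    # removes the common forced response in each year.
    departures = le_anoms - le_anoms.mean(axis=0, keepdims=True)
    le_std = float(np.std(departures))
    return control_std, le_std
```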
Our focus is on the temperature of the lower stratosphere (TLS), the temperature of the mid- to upper troposphere (TMT), and the temperature of the lower troposphere (TLT), which have not been analyzed in previous LE studies. Satellite-based microwave sounders have monitored the temperatures of these 3 layers since late 1978 (13, 14). We calculate synthetic TLS, TMT, and TLT from a 50-member LE generated with the Canadian Earth System Model (CanESM2) (3, 5, 10) and from a 40-member LE performed with the Community Earth System Model (CESM1) (1, 2). In both LEs, the models were driven by estimated historical changes in all major anthropogenic and natural external forcings (henceforth “ALL”).
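Synthetic satellite temperatures are typically obtained by applying a vertical weighting function for each microwave-sounder channel to the model's simulated temperature profile. The sketch below is a generic, minimal version of that idea; the function name and weights are placeholders, not the actual sounder weighting functions or the exact procedure used in this study.

```python
import numpy as np

def synthetic_layer_temperature(temp_profile, channel_weights):
    """Weighted vertical average of model air temperature, mimicking one satellite layer.

    temp_profile    : array (..., n_levels) of model air temperature (K)
    channel_weights : array (n_levels,) of weights for the channel
                      (placeholder values, not a real weighting function)
    """
    w = np.asarray(channel_weights, dtype=float)
    w = w / w.sum()                               # normalize so the weights sum to 1
    return np.tensordot(temp_profile, w, axes=([-1], [0]))
```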
For CanESM2, additional 50-member LEs were available with combined solar and volcanic effects only (SV) and with individual anthropogenic forcing by ozone (OZONE), aerosols (AERO), and well-mixed greenhouse gases (GHG). These ensembles allow us to quantify the contributions of different forcings to temperature changes in the CanESM2 ALL experiment. Comparable LEs were not available from CESM1 at the time this research was performed.
Our fingerprint approach relies on a standard method that has been applied to many different climate variables (8, 9, 11, 15, 16). We calculate pattern-based S/N ratios and the associated fingerprint detection times (Materials and Methods and SI Appendix).
Temperature Time Series and Patterns
The observations and CanESM2 and CESM1 ALL LEs are characterized by nonlinear cooling of the lower stratosphere (Fig. 1A). This nonlinearity is primarily driven by changes in ozone (18–20). After pronounced ozone depletion and stratospheric cooling in the second half of the 20th century, controls on the production of ozone-depleting substances led to gradual recovery of ozone and TLS in the early 21st century (18).
Observed lower stratospheric cooling over 1979 to 2018 ranges from −0.30 °C to −0.23 °C per decade (for UAH and RSS, respectively). TLS cooling trends are consistently weaker in CanESM2, which has an ensemble range of −0.18 °C to −0.14 °C per decade. This difference may be partly due to underestimated ozone loss in the observational dataset used to prescribe CanESM2 ozone changes (21, 22). In CESM1, an offline chemistry–climate model was used to calculate the stratospheric ozone changes prescribed in the ALL simulation (22). The CESM1 ensemble range (−0.27 °C to −0.23 °C per decade) includes 2 of the 3 satellite TLS trends.
In the troposphere, observations and both model ALL LEs show global warming over the satellite era (Fig. 1 B and C). Human-caused increases in well-mixed GHGs are the main driver of this signal (8, 9, 23). Simulated tropospheric warming is generally larger than in satellite data. In TMT, for example, the ensemble trend range is 0.35 °C to 0.43 °C per decade for CanESM2 and 0.20 °C to 0.28 °C per decade for CESM1, while the largest observational trend is 0.20 °C per decade.
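All trends quoted here are least-squares linear trends expressed per decade. A minimal example of that calculation for a single annual-mean time series (synthetic data over 1979 to 2018, not actual satellite or model output) is:

```python
import numpy as np

def decadal_trend(years, temps):
    """Least-squares linear trend, returned in degrees C per decade."""
    slope_per_year = np.polyfit(years, temps, deg=1)[0]
    return 10.0 * slope_per_year

years = np.arange(1979, 2019)                    # the 40-year satellite era
temps = 0.02 * (years - 1979) + 0.1 * np.random.default_rng(1).standard_normal(years.size)
print(round(decadal_trend(years, temps), 2), "degrees C per decade")
```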
Multiple factors contribute to these differences between simulated and observed warming rates. One factor is possible differences between model equilibrium climate sensitivity (ECS) and the true but unknown real-world ECS (24, 25). Differences in anthropogenic aerosol forcing may also play a role (26). Other relevant factors include different phasing of simulated and observed internal variability (27–29), omission in the ALL simulations of cooling from moderate post-2000 volcanic eruptions (27, 30, 31), and residual errors in satellite data (13, 32).
In terms of spatial patterns, both the observations and the model ALL ensembles show large-scale stratospheric cooling and tropospheric warming signals (Fig. 2). Even the CanESM2 and CESM1 ALL realizations with the smallest trends in global-mean temperature exhibit large-scale decreases in TLS and increases in TMT and TLT. There is also model–data agreement in hemispheric features of Fig. 2, such as the common signal of greater tropospheric warming in the Northern Hemisphere (NH).
Fig. 2. (A–N) Least-squares linear trends over 1979 to 2018 in annual-mean TLS (A, D, G, J, and M), TMT (B, E, H, K, and N), and TLT (C, F, I, and L). Results are from 1 individual CanESM2 and CESM1 ALL realization (A–F) and from 3 different satellite datasets (RSS, UAH, and STAR) (G–N). Model results display the ensemble member with the smallest global-mean lower stratospheric cooling trend or tropospheric warming trend.
The CanESM2 LEs with individual forcings provide further insights into the causes of atmospheric temperature changes in the CanESM2 ALL simulation. The OZONE, AERO, SV, and GHG LEs confirm the dominant roles of ozone depletion in lower stratospheric cooling and of well-mixed GHG increases in tropospheric warming (SI Appendix and SI Appendix, Figs. S1 and S2).
Local S/N Ratios
We consider next the geographical distribution of local S/N ratios. As defined here and elsewhere (1, 33), local S/N is the ratio between the ensemble-mean trend at grid-point x and the between-realization standard deviation of the trend at x. We focus on trends over 1979 to 2018 in the CanESM2 ALL ensemble (Fig. 3). The CESM1 ALL ensemble shows qualitatively similar patterns of local S/N (SI Appendix, Fig. S3).
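Given per-member trend maps, the local S/N definition used here reduces to a few lines of array arithmetic. The sketch below assumes a hypothetical array of trends with one map per ensemble member:

```python
import numpy as np

def local_signal_to_noise(member_trends):
    """Local S/N maps from per-member trend maps.

    member_trends : array (n_members, n_lat, n_lon) of grid-point trends,
                    one map per ensemble member
    Returns (signal, noise, snr): the ensemble-mean trend, the between-realization
    standard deviation of the trends, and their ratio.
    """
    signal = member_trends.mean(axis=0)
    noise = member_trends.std(axis=0, ddof=1)
    return signal, noise, signal / noise
```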
Fig. 3. Signal, noise, and S/N ratios in the 50-member CanESM2 ALL LE. Results are for TLS (A, D, and G), TMT (B, E, and H), and TLT (C, F, and I). The signal (A–C) is the ensemble-mean trend in annual-mean atmospheric temperature over 1979 to 2018. The noise (D–F) is the standard deviation of the 50 individual temperature trends. G–I show the ratio between signal and noise. Stippling in A–C denotes grid points where S/N exceeds 2.
In the lower stratosphere, the maximum cooling signal is at high latitudes in the Southern Hemisphere (SH), where ozone depletion has been largest (Fig. 3A). Cooling is smaller but significant in the tropics. The between-realization variability of TLS trends is largest over both poles and smallest in the deep tropics (Fig. 3D). These signal and noise patterns explain why TLS S/N ratios reach maximum values in the tropics (Fig. 3G).
The tropics also have advantages for identifying tropospheric warming. As a result of moist thermodynamic processes (11), TMT trends are largest in the tropics. Tropical noise levels are relatively small. This spatial congruence of high signal strength and low noise yields large S/N ratios for tropical TMT trends (Fig. 3 B, E, and H). Because TLT is more directly affected by feedbacks associated with reduced NH snow cover and Arctic sea-ice extent, lower tropospheric warming is largest poleward of 60°N. S/N ratios for TLT trends reach maximum values between 30°N and 30°S, where signal strength is more moderate but noise levels are small (Fig. 3 C, F, and I).
The spatial average of the local S/N results in Fig. 3 is smallest for TLS. This suggests that the forced signal in ALL would be detected latest for TLS. Fingerprinting yields the opposite result, for reasons discussed in SI Appendix.
Fingerprints and Leading Noise Patterns
Our signal detection method relies on a pattern of climate response to external forcing (11). This is the fingerprint, which we estimate here from the CanESM2 and CESM1 ALL ensembles (Materials and Methods and SI Appendix).
Before discussing fingerprint detection times, it is useful to examine basic features of the fingerprint and noise patterns. The CanESM2 ALL fingerprints capture global-scale lower stratospheric cooling and tropospheric warming (SI Appendix, Fig. S4). CESM1 ALL fingerprints (not shown) are very similar. The fingerprints are spatially dissimilar to smaller-scale, opposite-signed spatial features in the multimodel and single-model noise estimates (SI Appendix, Figs. S5 and S6). This is favorable for signal identification (34).
It was not evident a priori that the multimodel ensemble of CMIP5 control runs and the single-model between-realization variability in CanESM2 would yield similar internal variability patterns, yet they do. This similarity suggests that the patterns in SI Appendix, Figs. S5 and S6 are primarily dictated by large-scale modes of stratospheric and tropospheric temperature variability that are well captured in CanESM2 and the multimodel ensemble. It is also noteworthy that the magnitude and type of external forcing in the CanESM2 ALL and SV LEs do not appear to significantly modulate the large-scale spatial structure of the leading modes of atmospheric temperature variability. Forced modulation of certain modes of internal variability has been reported elsewhere (12, 35, 36).
Pattern-Based Signal-to-Noise Ratios
We estimate the fingerprint detection time from the behavior of pattern-based S/N ratios as a function of increasing trend length (Materials and Methods and SI Appendix).
Fig. 4 shows the individual components of the S/N ratios used to calculate detection time.
Fig. 4. Signal, noise, and S/N ratios from a pattern-based fingerprint analysis of annual-mean atmospheric temperature changes in the CanESM2 ALL ensemble and in satellite data. Results are for TLS (A, C, and E) and TMT (B, D, and F) and are a function of the trend length L. The domain is near global (82.5°N to 82.5°S).
The sharp decrease in signal amplitude for trends ending in the early 1990s reflects stratospheric warming and tropospheric cooling caused by the 1991 Pinatubo eruption (11) (Fig. 1). Pinatubo’s effects are of opposite sign to the searched-for ALL fingerprints (SI Appendix, Fig. S4 A and B). Because the large thermal inertia of the ocean has greater influence on tropospheric than on stratospheric temperature, the signal strength minimum occurs earlier for TLS than for TMT, and recovery of signal strength to pre-eruption levels is faster in TLS than in TMT (compare Fig. 4 A and B).
As the trend-fitting period L increases, the standard deviation of the null distributions of noise trends decreases.
The 4 different sources of internal variability information yield similar noise estimates.
For both the ALL ensemble and satellite data, exceedance of the signal detection threshold defines the fingerprint detection time (Materials and Methods).
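The general structure of this kind of pattern-based detection calculation is sketched below: data are projected onto a fingerprint, signal trends of increasing length L are compared with a null distribution of same-length noise trends, and the detection time is the first end year at which the resulting S/N ratio exceeds a threshold. This is an illustrative outline only; the fingerprint estimation, noise sampling, and significance threshold actually used here are described in Materials and Methods and SI Appendix, and the threshold and minimum trend length below are arbitrary placeholders.

```python
import numpy as np

def detection_time(obs_maps, fingerprint, noise_maps, years, threshold=1.645, min_len=10):
    """Illustrative pattern-based detection-time calculation (not the study's exact method).

    obs_maps    : array (n_years, n_space) of annual-mean anomaly maps
    fingerprint : array (n_space,) searched-for pattern
    noise_maps  : array (n_noise_years, n_space) of unforced anomaly maps
    years       : array (n_years,) of calendar years (e.g., 1979 onward)
    Returns the first end year at which the signal trend exceeds `threshold`
    times the standard deviation of same-length noise trends.
    """
    signal_ts = obs_maps @ fingerprint            # project data onto the fingerprint
    noise_ts = noise_maps @ fingerprint

    for L in range(min_len, len(years) + 1):      # trend length in years
        s_trend = np.polyfit(np.arange(L), signal_ts[:L], 1)[0]
        # Null distribution: non-overlapping L-year trends in the noise projection.
        n_trends = [np.polyfit(np.arange(L), noise_ts[i:i + L], 1)[0]
                    for i in range(0, len(noise_ts) - L + 1, L)]
        if len(n_trends) > 1 and s_trend / np.std(n_trends, ddof=1) > threshold:
            return years[L - 1]
    return None                                   # fingerprint not detected in the record
```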
Detection Time Results
Two types of fingerprint detection time are shown in Fig. 5. The box-and-whiskers plots are detection times calculated solely with model simulation output. The colored crosses denote the times at which model fingerprints are statistically identifiable in satellite temperature data.
Fig. 5. (A–F) Detection time for the model TLS (A and B), TMT (C and D), and TLT (E and F) fingerprints. Box-and-whiskers plots show detection times calculated solely with model simulation output; colored crosses denote the times at which model fingerprints are statistically identifiable in individual satellite datasets.
In “model-only” detection time calculations, the searched-for fingerprint is identified in individual realizations of the CanESM2 and CESM1 ALL ensembles rather than in satellite data.
There are 2 striking features of the model-only results. First, the CanESM2 and CESM1 ALL fingerprints are always detected earlier in the stratosphere than in the troposphere; this holds across regions, noise estimates, and the 2 model ALL LEs.
A second notable feature of the model-only results is that
In the lower stratosphere, the average value of
Next, we seek to determine whether the model-only detection time distributions encompass the times at which the model fingerprints are identifiable in satellite data.
In the troposphere,
Fig. 5 also provides information on the relative detectability of fingerprints in the 3 geographical domains. The most obvious differences are in the troposphere, where fingerprint detection occurs earlier in the NH than in the SH. This result reflects hemispheric differences in land–ocean distribution, heat capacity, and sea ice and snow cover changes.
Detection times for model TLS fingerprints are similar in the 3 satellite datasets (Fig. 5 A and B). This is due to 2 factors: long-term lower stratospheric cooling trends that differ by less than 25% in RSS, STAR, and UAH and the “synchronization” of TLS changes for several years after the Pinatubo eruption. In contrast, detection times for model TMT and TLT fingerprints are (in all but 2 cases) later in UAH than in RSS or STAR. Delayed detection occurs because UAH shows reduced long-term tropospheric warming relative to RSS and STAR (11, 13, 14, 32). Even in UAH data, however, detection of the model-predicted TMT and TLT fingerprints at a high level of statistical significance still occurs.
Implications
CanESM2 and CESM1 have ECS values of 3.68 °C and 4.1 °C, respectively. The higher-ECS CESM1 yields global-mean tropospheric warming that is in closer agreement with satellite-derived warming rates. The lower-ECS CanESM2 overestimates observed tropospheric warming (Fig. 1 B and C). In accord with these global-mean results, our pattern-based analysis shows that fingerprint detection times in satellite tropospheric temperature data are more consistent with the range of model-only detection times obtained with CESM1 than with the corresponding CanESM2 range.
Clearly, ECS is not the sole determinant of consistency between fingerprint detection times in observations and in a model LE. The applied forcing must also have a significant impact. The cooling associated with (highly uncertain) negative indirect anthropogenic aerosol forcing is substantially greater in CESM1 than in CanESM2 (26). Larger negative aerosol forcing compensates for the larger GHG-induced tropospheric warming arising from the higher ECS of CESM1.
This highlights the need for caution in interpreting apparent consistency between fingerprint detection times in observations and in a large ensemble. Consistency may mask underlying problems with both forcing and response. Such interpretational difficulties are not unique to our study—they also arise in comparing the local “time of emergence” (ToE) of an anthropogenic signal (33, 39, 40) in multiple models or in models and observations.
How might we determine whether CanESM2 or CESM1 has a more realistic estimate of the true tropospheric temperature response to combined anthropogenic and natural external forcing? One way of addressing this question involves applying pattern-based regression methods (8, 9, 41–43) to quantify the strength of the model-predicted GHG and AERO fingerprints in observational data. Ideally, single-forcing GHG and AERO LEs would be available for this purpose. This would facilitate direct comparison of the regression coefficients (typically referred to as “scaling factors”) for the CanESM2 and CESM1 fingerprints. At the time this research was performed, however, single-forcing GHG and AERO LEs were available from CanESM2 but not from CESM1, so comparison of scaling factors in the 2 models was not feasible.
Scaling factor estimates are available for CanESM2. Three independent studies, using hydrographic profiles of temperature and salinity (5), tropospheric temperature (44), and surface temperature (45), suggest that the CanESM2 GHG signal may be larger than in observations. This is in accord with the larger-than-observed tropospheric warming found here (Fig. 1 B and C). The evidence is more equivocal regarding the question of whether the CanESM2 anthropogenic aerosol signal is larger or smaller than in observations (SI Appendix).
In addition to such statistical analyses, it is imperative to improve our physical understanding of the forcing by (and response to) anthropogenic aerosols, particularly for aerosol indirect effects (26). Prospects for progress are promising. Results from relevant CMIP6 single-forcing simulations (and in some cases LEs) performed under the Detection and Attribution Model Intercomparison Project are now available for analysis by the scientific community (46). The Radiative Forcing Model Intercomparison Project will provide estimates of aerosol direct and indirect forcing from participating models (47). Finally, improved methods of diagnosing and comparing model-based and observationally based estimates of indirect forcing have the potential to reduce uncertainty in aerosol effects on climate (48, 49).
In terms of reducing climate sensitivity uncertainties, there are now more mature strategies for evaluating the robustness and physical plausibility of a wide array of “emergent constraints” on ECS (50⇓–52). Additionally, Bayesian inference strategies are being employed for combining information from the often divergent results obtained with different emergent constraints.
The developments mentioned above provide grounds for cautious optimism regarding scientific prospects for narrowing uncertainties in aerosol forcing and ECS. In the future, we could (and should) be able to make more informed assessments of the relative plausibility of the CanESM2 and CESM1 fingerprint detection times found here.
Materials and Methods
Satellite Atmospheric Temperature Data.
We used gridded, monthly mean satellite atmospheric temperature data from RSS (13), STAR (14), and UAH (17). RSS and UAH provide satellite measurements of TLS, TMT, and TLT. STAR produces TLS and TMT data only. Temperature data were available for January 1979 to December 2018 for versions 4.0 of RSS, 4.1 of STAR, and 6.0 of UAH.
CanESM2 Model Output.
We analyzed simulation output from five 50-member LEs: ALL, SV, AERO, OZONE, and GHG. For the first 4 ensembles, CanESM2 was run over the 1950 to 2005 period with forcing by ALL, SV, AERO, and OZONE. After 2005 each of these 4 numerical experiments continued with the forcing appropriate for that ensemble from the Representative Concentration Pathway 8.5 (RCP8.5) scenario (38). The ALL, AERO, and OZONE LEs end in 2100; the SV ensemble ends in 2020 (3⇓–5). Initialization of individual members for these 4 ensembles is described in SI Appendix.
A CanESM2 LE with historical changes in well-mixed GHGs alone was not performed. The GHG signal can be reliably estimated, however, by subtracting from each ALL realization the local time series of the sum of the SV, AERO, and OZONE ensemble means (5).
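Because the estimate is a simple residual, it can be written compactly. The sketch below assumes anomaly arrays on a common grid and time axis (hypothetical variable names), and it relies on the same assumption as the subtraction approach itself, namely that the responses to the separate forcings add approximately linearly.

```python
import numpy as np

def estimate_ghg_signal(all_members, sv_mean, aero_mean, ozone_mean):
    """GHG-only estimate for each ALL realization, obtained by residual.

    all_members : array (n_members, n_time, ...) of ALL-forcing anomalies
    sv_mean, aero_mean, ozone_mean : arrays (n_time, ...) of the corresponding
        single-forcing ensemble-mean anomalies on the same grid and time axis
    """
    combined = np.asarray(sv_mean) + np.asarray(aero_mean) + np.asarray(ozone_mean)
    return np.asarray(all_members) - combined
```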
CESM1 Model Output.
The 40-member CESM1-CAM5 ALL ensemble is described in detail elsewhere (1, 2). The initial 30-member ensemble was augmented with 10 additional realizations. All realizations have the same historical forcing from 1920 to 2005 and RCP8.5 forcing from 2006 to 2100. As for the CanESM2 LEs, only the atmospheric initial conditions were varied by imposing small random differences on the air temperature field of realization 1 (2).
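A minimal sketch of this style of initialization is given below; the perturbation amplitude and seeding scheme are placeholders for illustration, not the values or procedure actually used to generate the CESM1 ensemble.

```python
import numpy as np

def perturb_initial_temperature(air_temp, member, scale=1e-14):
    """Perturbed initial air-temperature field for one hypothetical LE member.

    air_temp : array of the realization-1 initial air temperature (K)
    member   : integer member index, used here only to seed the perturbation
    scale    : perturbation amplitude (illustrative of round-off-level noise)
    """
    rng = np.random.default_rng(member)
    return np.asarray(air_temp) + scale * rng.standard_normal(np.shape(air_temp))
```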
CMIP5 Model Output.
One of our estimates of internal variability relied on multimodel output from CMIP5 (53). We analyzed 36 different preindustrial control runs with no year-to-year changes in external forcings. Control runs analyzed are listed in ref. 11.
Method for Correcting TMT Data.
Trends in TMT estimated from microwave sounders receive a large contribution from lower stratospheric cooling (54). We used a standard regression-based method to remove the bulk of this cooling component from TMT (SI Appendix).
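The correction amounts to a linear combination of TMT and TLS anomalies that removes most of the stratospheric contribution. The sketch below is generic: the coefficients shown are placeholders only, and the regression coefficients actually applied (which can vary with latitude) are documented in SI Appendix.

```python
import numpy as np

def correct_tmt(tmt, tls, a=1.1, b=0.1):
    """Regression-based removal of the lower stratospheric influence on TMT.

    tmt, tls : arrays of uncorrected TMT and TLS anomalies
    a, b     : linear-combination coefficients (placeholder values only)
    """
    return a * np.asarray(tmt) - b * np.asarray(tls)
```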
Fingerprint Method.
Our fingerprint method relies on an estimate of the fingerprint pattern of externally forced atmospheric temperature change, obtained here from the CanESM2 and CESM1 ALL ensembles. Full details of the method are given in SI Appendix.
Noise Estimates.
We rely on 4 different internal variability estimates, referred to here as CMIP5, ALL1, ALL2, and SV. Details of these estimates are given in SI Appendix.
Data Availability.
All primary satellite and model temperature datasets used here are publicly available. Synthetic satellite temperatures calculated from model simulations are provided at https://pcmdi.llnl.gov/research/DandA/.
Acknowledgments
We acknowledge the World Climate Research Programme’s Working Group on Coupled Modeling, which is responsible for CMIP, and we thank the climate modeling groups for producing and making available their model output. For CMIP, the US Department of Energy’s Program for Climate Model Diagnosis and Intercomparison provides coordinating support and led development of software infrastructure in partnership with the Global Organization for Earth System Science Portals. We also acknowledge the Environment and Climate Change Canada’s Canadian Center for Climate Modeling and Analysis for executing and making available the CanESM2 Large Ensemble simulations used in this study and the Canadian Sea Ice and Snow Evolution Network for proposing the simulations. Helpful comments were provided by Stephen Po-Chedley, Neil Swart, Mike MacCracken, and Ralph Young. Work at Lawrence Livermore National Laboratory (LLNL) was performed under the auspices of the US Department of Energy under Contract DE-AC52-07NA27344 through the Regional and Global Model Analysis Program (to B.D.S., J.F.P., M.D.Z., and G.P.) and the Early Career Research Program Award SCW1295 (to C.B.). The views, opinions, and findings contained in this report are those of the authors and should not be construed as a position, policy, or decision of the US Government or the US Department of Energy.
Footnotes
- 1To whom correspondence may be addressed. Email: santer1@llnl.gov.
Author contributions: B.D.S., J.C.F., and S.S. designed research; B.D.S. and J.F.P. performed research; B.D.S., J.C.F., S.S., and M.D.Z. analyzed data; B.D.S., J.C.F., S.S., C.B., and G.P. wrote the paper; and J.C.F. contributed model simulation output.
The authors declare no conflict of interest.
This article is a PNAS Direct Submission.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1904586116/-/DCSupplemental.
- Copyright © 2019 the Author(s). Published by PNAS.
This open access article is distributed under Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND).
References
- C. Deser, A. S. Phillips, M. A. Alexander, B. V. Smoliak
- J. E. Kay et al.
- N. C. Swart, J. C. Fyfe, E. Hawkins, J. E. Kay, A. Jahn
- N. C. Swart, S. T. Gille, J. C. Fyfe, N. P. Gillett
- K. A. McKinnon, A. Poppick, E. Dunn-Sigouin, C. Deser
- K. Hasselmann
- S. Solomon et al.; G. C. Hegerl et al.
- T. F. Stocker et al.; N. L. Bindoff et al.
- M. C. Kirchmeier-Young, F. W. Zwiers, N. P. Gillett
- B. D. Santer et al.
- M. P. King, F. Kucharski, F. Molteni
- C. Mears, F. J. Wentz
- C. Z. Zou, M. D. Goldberg, X. Hao
- P. J. Gleckler et al.
- K. Marvel, C. Bonfils
- R. W. Spencer, J. R. Christy, W. D. Braswell
- S. Solomon et al.
- V. Ramaswamy et al.
- V. Aquila et al.
- S. Solomon, P. J. Young, B. Hassler
- V. Eyring et al.
- T. R. Karl, S. J. Hassol, C. D. Miller, W. L. Murray
- M. D. Zelinka, T. Andrews, P. M. Forster, K. E. Taylor
- J. C. Fyfe et al.
- G. A. Meehl, H. Teng, J. M. Arblaster
- K. E. Trenberth
- S. Solomon et al.
- S. Po-Chedley, T. J. Thorsen, Q. Fu
- K. B. Rodgers, J. Lin, T. L. Frölicher
- B. D. Santer et al.
- J. M. Arblaster, G. A. Meehl
- S. Solomon et al.
- M. Meinshausen et al.
- K. M. Keller, F. Joos, C. C. Raible
- J. Li, D. W. J. Thompson, E. A. Barnes, S. Solomon
- P. A. Stott et al.
- F. C. Lott et al.
- N. P. Gillett, V. K. Arora, D. Matthews, P. A. Stott, M. R. Allen
- N. P. Gillett et al.
- R. Pincus, P. M. Forster, B. Stevens
- J. C. Golaz et al.
- B. Stevens, S. C. Sherwood, S. Bony, M. J. Webb
- V. Eyring et al.
- A. Hall, P. Cox, C. Huntingford, S. Klein