Research Article

Model confirmation in climate economics

Antony Millner and Thomas K. J. McDermott
aGrantham Research Institute on Climate Change and the Environment, London School of Economics and Political Science, London WC2A 2AE, United Kingdom;
bSchool of Economics, University College Cork, Cork T12 YN60, Ireland


PNAS August 2, 2016 113 (31) 8675-8680; first published July 18, 2016; https://doi.org/10.1073/pnas.1604121113
For correspondence: a.millner@lse.ac.uk
Edited by M. Granger Morgan, Carnegie Mellon University, Pittsburgh, PA, and approved June 7, 2016 (received for review March 19, 2016)


Significance

Benefit–cost integrated assessment models (BC-IAMs) combine climate science, impacts studies, and representations of long-run economic growth to estimate the costs and benefits of climate policy. They provide valuable qualitative insights into how policy outcomes might depend on ethical and empirical assumptions. Increasingly, however, BC-IAMs are being used to inform quantitative policy choices, despite their economic components being largely untested, or untestable, over relevant time scales. We illustrate the potential benefits of model confirmation exercises for policy applications, demonstrating that the economic growth model used by a prominent BC-IAM had little predictive power over the 20th century. Insofar as possible, out-of-sample empirical tests of the economic components of BC-IAMs should inform their future development for real-world applications.

Abstract

Benefit–cost integrated assessment models (BC-IAMs) inform climate policy debates by quantifying the trade-offs between alternative greenhouse gas abatement options. They achieve this by coupling simplified models of the climate system to models of the global economy and the costs and benefits of climate policy. Although these models have provided valuable qualitative insights into the sensitivity of policy trade-offs to different ethical and empirical assumptions, they are increasingly being used to inform the selection of policies in the real world. To the extent that BC-IAMs are used as inputs to policy selection, our confidence in their quantitative outputs must depend on the empirical validity of their modeling assumptions. We have a degree of confidence in climate models both because they have been tested on historical data in hindcasting experiments and because the physical principles they are based on have been empirically confirmed in closely related applications. By contrast, the economic components of BC-IAMs often rely on untestable scenarios, or on structural models that are comparatively untested on relevant time scales. Where possible, an approach to model confirmation similar to that used in climate science could help to build confidence in the economic components of BC-IAMs, or focus attention on which components might need refinement for policy applications. We illustrate the potential benefits of model confirmation exercises by performing a long-run hindcasting experiment with one of the leading BC-IAMs. We show that its model of long-run economic growth—one of its most important economic components—had questionable predictive power over the 20th century.

  • climate policy
  • integrated assessment
  • model confirmation
  • structural uncertainty
  • economic growth

Prediction is very difficult, especially about the future.

Niels Bohr

A little over 20 years ago a seminal article on the interpretation of numerical models in the earth sciences appeared in a leading scientific journal (1). The authors argued that although verification and validation of these models is strictly logically impossible, model confirmation is a necessary and desirable step. In the intervening years an impressive body of work in climate science has compared the predictions of global climate models with observations. Chapter 9 of the Intergovernmental Panel on Climate Change’s Fifth Assessment Report summarizes recent work (2), stating that “model evaluation … reflects the need for climate models to represent the observed behaviour of past climate as a necessary condition to be considered a viable tool for future projections.” Scientists continue to use empirical tests of climate models to refine and improve them, while also reflecting on the methodological questions that arise when interpreting model predictions to inform decision making (3).

Climate models are, however, only a part of the technical apparatus that has been developed to inform climate policy decisions. Integrated assessment models (IAMs) provide the link between physical science and policy. IAMs come in two varieties: benefit–cost models, which attempt to estimate the aggregate costs and benefits of climate policy to society, and detailed process models, which usually analyze specific policies in a cost-effectiveness framework (i.e., assuming an exogenous policy objective), often in much greater sectoral detail than the highly aggregated benefit–cost models (4). We focus on benefit–cost IAMs (BC-IAMs) in this article, because these have been the focus of research activity in economics (5, 6) and are increasingly influential in policy applications.

BC-IAMs couple simplified climate models with representations of the global economy in an attempt to understand the trade-offs between alternative policy options. They have been applied to a wide variety of questions. How might different welfare frameworks affect the attractiveness of policy options (7)? Which approaches to international agreements are likely to succeed (8)? How might different policy instruments affect innovation in energy technologies (9)? These modeling exercises provide valuable insights into the possible qualitative differences between policy options. However, their quantitative implications are conditional on the veracity of the underlying models. BC-IAMs can be used to show that policy A leads to higher welfare than policy B in model X, but to extend this model-based finding to claims about reality we need to know how well X approximates reality. Are the equations and initialization procedures used by model X structurally sound, and, if not, what risks might we run by treating them as if they are?*

Assessing the structural soundness of economic modeling assumptions in BC-IAMs has recently become an increasingly pressing issue, because they are beginning to be used to inform quantitative real-world policy decisions. For example, the US government has recently established an interagency working group (10, 11) to estimate a value of the social cost of carbon (SCC), the welfare cost to society from emitting a ton of CO2. The value of the SCC that was adopted will form part of the cost–benefit assessment of all federal projects and policies, and thus has the potential to influence billions of dollars of investment. The process used to establish a value for the SCC relied heavily on BC-IAMs, the first time they have directly informed quantitative federal rules. Whereas the US SCC estimate is perhaps the most prominent recent example, other governments and international organizations are also increasingly turning to BC-IAMs to inform policy choices.

As soon as a model is used to inform quantitative policy decisions, the criteria by which it must be judged become more demanding. A given model may be a useful tool for exploring the qualitative implications of different assumptions, but in order for it to be profitably applied to policy choices we need to know how plausible those assumptions are as empirical hypotheses about how the world works. If a model can be shown to be structurally flawed in hindcasting exercises, our expectation should be that similar errors might occur when using it to make predictions that inform policy choices today. No model is perfect, and we should not expect any given model to insure us entirely against regret. However, to the extent possible, it is in our interests to attempt to ascertain how wrong we might go when relying on a model to make decisions. As has occurred in climate science, this exercise could build confidence in those economic modeling assumptions that are found to be consistent with empirical data, and focus attention on those assumptions that may require refinement for policy applications.

Importantly, confirmation exercises provide entirely different information about a model’s validity from model calibration, sensitivity analysis, probabilistic approaches to quantifying parametric uncertainty, or expert elicitation of model parameters, all of which are standard practice in the field. These uncertainty quantification methods explore the space of model outcomes (and perhaps estimate their likelihood), taking the model’s structural assumptions as given. Model confirmation, however, tests whether the equations and initialization procedures a model uses to generate predictions are able to provide a good representation of observed outcomes. A model whose outcome space has been explored using the uncertainty quantification methods mentioned above may still yield error-prone predictions if the underlying modeling assumptions are not a good fit to reality. Although these methods can of course generate a distribution of model outcomes, whether or not such distributions reflect the uncertainty we actually face depends on the structural soundness of the model used to generate them. We note that model confirmation is only possible if a model is specified in a self-contained manner (i.e., it is composed of a set of structural assumptions and free parameters that can, at least in principle, be estimated from data). This makes models that rely on fixed external scenarios for generating predictions very difficult to confirm ex ante. Although such models could provide a good characterization of current uncertainty, we have no way of assessing whether this is likely to be the case by testing their past performance.

Although the physical science models upon which the scientific components of BC-IAMs are based have often been subjected to tests of structural validity, their economic components are often either based on untestable exogenous scenarios, or on structural modeling assumptions that are largely untested on the temporal scales that are relevant to climate applications. In part this reflects genuine data difficulties, which make some economic assumptions in BC-IAMs very difficult to confirm. For example, BC-IAMs assume a functional form for the climate damage function, which quantifies the impact of global average temperature changes on the aggregate productivity of the economy. BC-IAM results are highly sensitive to the rate of increase of damages with temperature at high temperatures, but because we have only seen a small amount of average warming so far, it is very difficult to test any assumed functional form for damages. Some of the most important economic components of some BC-IAMs are, however, amenable to empirical tests.

An Out-of-Sample Test of a Model of Long-Run Economic Growth

To demonstrate what may be learned from model confirmation exercises we focus on the economic growth model used by the well-known Dynamic Integrated Climate-Economy (DICE) BC-IAM (15). The assumptions BC-IAMs make about long-run economic growth have a very substantial effect on leading policy outputs such as the SCC. This is because economic growth strongly affects the path of greenhouse gas emissions, the magnitude of climate damages, and the wealth of future generations, all key determinants of the aggregate costs and benefits of climate policy. Unlike other well-known BC-IAMs [e.g., refs. 16 and 17], which rely on external scenarios for economic growth that are impossible to test empirically ex ante, DICE uses an explicit model of economic growth that makes it well-suited to empirical testing, and is also widely deployed across climate economics (see, e.g., ref. 18). A crucial part of this growth model is a model of the temporal evolution of total factor productivity (TFP), a quantity that sets the overall level of technological advancement in the economy. Economic growth is largely driven by technological progress in DICE. Thus, although policy evaluation in DICE is also highly sensitive to other structural modeling assumptions (e.g., the shape of the damage function and the evolution of abatement costs), a lot depends on how it models overall technological progress.†

To test the structural assumptions and initialization procedures used by DICE’s economic growth model, we consider the following question: How would this model fare if we asked it to predict the growth path of a major economy over the 20th century? This question is closely analogous to those asked of climate models by climate scientists (2). The model of the evolution of the economy DICE employs is a version of the Ramsey neoclassical growth model, familiar to any student of macroeconomics (see ref. 19 for a detailed exposition). In this model economic output is generated by competitive firms and is either consumed or reinvested in firms. Firms produce output via a production technology, which uses the capital and labor supplied by consumers as inputs. In DICE technological progress is modeled as an increasing trend in TFP, which acts as a multiplier on firms’ production technologies. Thus, as TFP grows, and the technologies of production become more advanced, fewer capital and labor inputs are required to generate a given level of economic output. A specific model of the time dependence of TFP is assumed in DICE. This model depends on free parameters that can be estimated from economic history.‡

We test this model’s predictive performance using recently compiled data on the US economy from 1870 to 2010 (21). We single out the United States because it is the largest economy for which detailed long-run economic data are available, and because of its position at the technological frontier over much of the 20th century (22). Our tests are as generous as possible to the model (e.g., we assume a perfect forecast of labor supply) and stick closely to the calibration and forecasting methodology used by DICE (details of the model implementation are available in Supporting Information). To test the model’s predictive power we divide the data into different training and verification windows; 95% confidence intervals (CI) for the parameters of the TFP model are inferred from the training data. The state equation for the capital stock and empirically estimated model of TFP evolution are then used to predict economic output.
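The training-and-verification procedure can be illustrated with a self-contained toy version. Everything specific below (the synthetic data, a fixed savings rate of 0.2, depreciation of 0.05, and a log-linear least-squares fit of TFP) is our simplifying assumption for illustration; the paper's actual calibration follows DICE and is detailed in its Supporting Information.

```python
import math

ALPHA = 0.25  # capital share; the paper's long-run average for the US data

def infer_tfp(Y, K, L, alpha=ALPHA):
    # Back out TFP from the Cobb-Douglas identity: A_t = Y_t / (K_t^a * L_t^(1-a))
    return [y / (k ** alpha * l ** (1 - alpha)) for y, k, l in zip(Y, K, L)]

def fit_exponential(A):
    # Least-squares fit of log A_t = log A_0 + t * log(1 + g_0) over the window
    n, t = len(A), list(range(len(A)))
    logA = [math.log(a) for a in A]
    tbar, ybar = sum(t) / n, sum(logA) / n
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, logA))
             / sum((ti - tbar) ** 2 for ti in t))
    return math.exp(ybar - slope * tbar), math.exp(slope) - 1  # (A_0, g_0)

def forecast_gdp(K0, L_path, A0, g0, s=0.2, delta_K=0.05, alpha=ALPHA):
    # Propagate the capital stock forward under the fitted TFP model,
    # with consumption C_t = (1 - s) * Y_t (an illustrative savings rule)
    K, Y_out = K0, []
    for t, L in enumerate(L_path):
        Y = A0 * (1 + g0) ** t * K ** alpha * L ** (1 - alpha)
        Y_out.append(Y)
        K = s * Y + (1 - delta_K) * K
    return Y_out

# Generate a toy economy that follows the model exactly, train on the
# first 50 "years", and verify the forecast on the remaining 30.
T, g_true = 80, 0.015
L_path = [100.0 * 1.01 ** t for t in range(T)]
K_path, Y_path, K = [], [], 500.0
for t in range(T):
    Y = 2.0 * (1 + g_true) ** t * K ** ALPHA * L_path[t] ** (1 - ALPHA)
    K_path.append(K)
    Y_path.append(Y)
    K = 0.2 * Y + 0.95 * K

A0_hat, g0_hat = fit_exponential(infer_tfp(Y_path[:50], K_path[:50], L_path[:50]))
Y_fore = forecast_gdp(K_path[50], L_path[50:], A0_hat * (1 + g0_hat) ** 50, g0_hat)
```

On data generated by the model itself, the fitted `g0_hat` recovers `g_true` and the 30-year GDP forecast tracks the realized path exactly; the paper's experiment runs this same loop on the real US series, where the 1870–1920 fit badly underpredicts the postwar path.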

Fig. 1 is illustrative of our model estimation and confirmation methodology. The figure depicts a long-run forecast of TFP and economic output obtained by estimating the TFP model on the 50 y of data from 1870 to 1920. Fig. 1, Left depicts the fit of the TFP model to the training data and its out-of-sample projection of TFP. Fig. 1, Right shows an out-of-sample projection of gross domestic product (GDP) at 1920, which is generated using the empirically estimated TFP model, the state equation for the evolution of the capital stock, and a perfect forecast of labor supply. The figure shows that although the TFP model fits the training data well, its out-of-sample forecast substantially underestimates technological progress in the latter half of the 20th century. These errors are compounded for GDP projections, because persistent underestimates of TFP affect predicted investment flows and capital formation in each future period, which further bias the model downward. We note, however, that although the presence of model errors is significant, the fact that the model was downward-biased in 1920 does not imply that it will be downward-biased in all periods, as we demonstrate below.

Fig. 1.

Long-run forecasts of US TFP (Left) and economic output (Right) from the economic growth model used by DICE. The model was trained on the data from 1870 to 1920 and projections made in 1920. Solid red lines are realized data values. Dashed black lines are forecasts generated using model parameter values at the boundary of estimated 95% CI, and solid black lines (Left) show the fit of the TFP model to the data over the training window.

Although the out-of-sample forecasts of TFP and GDP the model generates in 1920 are not successful, they nevertheless look reasonable when viewed from the perspective of the 50-y data series up to that year. Had economists produced these predictions at the time based on only these 50 y of data, they would no doubt have been perceived as plausible future growth scenarios. From today’s perspective, however, the model looks like a less-reliable predictive tool. An important reason why the model performs poorly is that the post-World War II boost in productivity growth is not presaged in the training data at 1920. This finding is indicative of the difficulty of predicting long-run technological developments. We face precisely the same difficulties today when using BC-IAMs to project economic growth into the next century and beyond (see Supporting Information for further discussion).

Whereas Fig. 1 suggests that the model’s long-run predictive performance could be a concern, it focuses on only a single forecasting date, 1920. In addition, even if the model’s long-run predictions are flawed, it could be a useful predictive tool on intermediate time scales (e.g., 30 y), where unpredictable technological jumps are less likely to make past data unrepresentative of future outcomes. Although 30 y may seem a short forecast horizon for a problem as long-lived as climate change, fully 50% of the value of the SCC is determined by outcomes over this period under some parameterizations of the DICE model (23).§ To address these issues, Fig. 2 extends the analysis of Fig. 1, summarizing the model’s predictive performance at each year in the data series, for 30- and 50-y training and confirmation windows. For each year in the period 1900–1980 (1920–1960) the model was trained on the previous 30 (50) y of data, and the estimated model used to forecast GDP for the next 30 (50) y. Although the model performs well in some 30-y periods, in most years the realized growth outcome falls outside of the forecasted interval. Arguably, the model is thus not a successful predictive tool on this shorter time scale, despite less sensitivity to large unpredictable shifts in the technological frontier. For 50-y forecasts, the model performs well post-World War II, but poorly in the prewar period. This shows that the illustration of model performance depicted in Fig. 1 is not exceptional. The 50-y forecasts further illustrate the sensitivity of growth projections to structural breaks of the kind that followed the war and demonstrate the value of a long historical perspective. Had we only evaluated the model on the most recent 60 y of data we would likely have overestimated its long-run predictive performance.

Fig. 2.

Thirty-year (Left) and 50-y (Right) forecasts of US economic growth from the economic growth model used by DICE. (Left) The solid red line at date T denotes the realized compound annual growth rate over the period [T,T+30]. The dashed black lines at date T denote the forecasted interval for the growth rate over [T,T+30] when the model is trained on data from [T−30,T], using the same estimation and prediction methodology as in Fig. 1. (Right) The equivalent for 50-y training and verification windows.

Our analysis suggests that the version of the neoclassical growth model that DICE relies on could be subject to structural errors on the temporal scales relevant to climate polices. The Ramsey growth model, and more complex models that endogenize the process of technical change, have been profitably applied to a variety of empirical questions in macroeconomics. It is thus important to understand how the use of these models in DICE and other climate applications differs from their more standard empirical applications. Growth models are usually used in empirical applications to explain cross-country differences between the historical growth paths of different countries. In BC-IAMs these models are used to predict the absolute level of global or regional economic output over the coming centuries. When neoclassical growth models are used to explain differences in past outcomes across countries, technical change, in the form of the growth rate of TFP, is formally nothing more than a residual in a linear regression. It is the part of empirical growth data that is not explained by the endogenous factors in the model (i.e., the productivity of capital and labor). If, however, such models are used to make predictions, as in DICE, the future realizations of TFP must also be predicted. This requires us to posit an explicit quantitative model of the evolution of TFP over the coming decades. However, we have no law-like theory of long-run technical change that parallels the predictive successes that have been achieved in the physical sciences (24). This seems unlikely to change in the near future, and more sophisticated models that endogenize the process of technological change also seem unlikely to provide high-powered predictive tools, despite their more nuanced representation of its causes.¶ Just as natural selection explains differences in species’ phenotypes without predicting future adaptations, so growth theory has proven to be an insightful tool for explaining the causal determinants of cross-country differences in historical growth outcomes. Prediction, however, is a different matter.

Implications for the Development and Use of BC-IAMs in Quantitative Policy Applications

What can be concluded from this first example of a model confirmation exercise for the economic components of a BC-IAM? Of course, DICE is a single (albeit prominent) example of a BC-IAM, and its implementation of the Ramsey model a single (albeit frequently deployed) representation of the process of long-run economic growth. Our findings are not necessarily representative of how other growth models might fare in similar confirmation exercises, the structural validity of other BC-IAMs, or the performance of their economic components. Our point, however, is that because most of the economic components of BC-IAMs have not, to our knowledge, been subjected to empirical tests of structural validity using historical data series, we do not know what their empirical status is for quantitative policy applications. Without testing models in hindcasting tasks closely related to the uses we wish to put them to today, we cannot gauge the extent of any possible model errors.

We close with four recommendations for the testing and use of BC-IAMs in policy applications. First, those components of BC-IAMs whose structural properties can be meaningfully tested using historical data should be so tested. Confirmation exercises can build confidence in model components that perform well historically and indicate the range of model parameters that needs to be considered for a given model to have a chance of making sensible out-of-sample forecasts. If no such parameter range can be identified from such an exercise, the model’s structural assumptions might need to be revised for the purposes of policy applications. Calibration (a within-sample exercise) and parametric uncertainty quantification techniques are not substitutes for this procedure, because we need to test the out-of-sample performance of a model’s structural assumptions. This is not to say, however, that untested or structurally flawed models cannot be useful for illustrating qualitative conceptual points about alternative modeling assumptions. Many BC-IAMs are used profitably for this purpose in the academic community today, and none of our points undermines the value of such modeling exercises if they are interpreted with sufficient care. The hurdles a model must jump over to qualify for application to quantitative policy choices should, however, be more demanding.

Second, the economic components of BC-IAMs should be based on testable structural hypotheses, in so far as this is possible. The confirmation exercise we conducted with the DICE model was only possible because the model is specified in a self-contained manner, and with sufficient structural detail to allow it to be meaningfully compared with data. This is a great virtue of the DICE model. Although in this case our confirmation exercise raised the possibility of quantitatively meaningful errors when applying this model to policy questions, we were at least able to ask (and partially answer) the question, What risks might we run by assuming that the world behaves in line with the model? Models that rely more heavily on external scenarios for key economic components do not allow for this kind of empirical testing. If a model component cannot be tested, we cannot hope to gain confidence in it ex ante, even if it in fact turns out to perform well ex post. Thus, although a set of exogenous scenarios could turn out to capture our underlying uncertainty, we can never estimate what risks we might run by assuming this to be the case when making decisions today. The practice of making the assumptions in BC-IAMs testable could help to build confidence in their outputs and filter out plausible from implausible structural assumptions.#

Third, policy choices should be based on estimates from many plausible, structurally distinct, models. As we have noted there are many important aspects of BC-IAMs that we cannot hope to test empirically today, because the relevant verification data will only be realized long after current policies are enacted. There is thus substantial irreducible uncertainty about some of the core structural relationships in BC-IAMs. Exploring a wide range of structural assumptions—not just about overall technological change and economic growth but also about climate damages and abatement costs—is crucial if we wish the policy prescriptions from modeling exercises to more accurately reflect the extent of our uncertainty about the consequences of climate policies. In our view, and that of others (5, 29), the set of BC-IAMs that are commonly applied in current policy analysis may underestimate the risks of inaction on climate change, in part because of a comparative lack of structural heterogeneity.

Fourth, the decision tools that are used to select policies should reflect the fact that our models are at best tentative predictive tools. Modern decision theory has developed a rich suite of tools for rational decision making under deep uncertainty that allow us to express our lack of confidence in model output, yet most policy analysis with BC-IAMs still relies on decision tools that treat the uncertainty in climate policy as if it were of the same character as tossing a coin or rolling a die (30). We should instead accept the limits of our knowledge and use decision tools that fit the profoundly uncertain task at hand.

An Abstract Definition of Structural Soundness for IAMs

An integrated assessment model is a map F from initial conditions I, empirical estimates of parameter values P, ethical assumptions E, and policies Π, to consequences C_E:

F(Π | I, P; E) = C_E.

The subscript E in C_E indicates that these are consequences deemed to be relevant according to the ethical assumptions E. The consequence space may be multidimensional and include distributions over, for example, time-dependent streams of consumption. All or some of I, P, and C_E may take values in the space of probability distributions over more primitive spaces, or another mathematical structure capable of representing uncertain knowledge. The policy space may be equally rich; for example, it could include straightforward “open-loop” policies, as well as more sophisticated “feedback” policies that respond to new information when it becomes available. Now let RW(Π; E) be the “real-world” mapping of policy choices to consequences. This map does not depend on initial conditions or parameter values (because these are both fixed in the real world), but it may still take values in the space of probability distributions over primitive consequences, owing to measurement error, missing observations, and so on. In addition, define a “similarity metric” d(C, C′) on the space of consequences. This function should represent the similarity between two consequences. So, for example, if consequences take values in the space of probability distributions over a metric space, d(C, C′) could be the Prohorov metric. We say a model F is structurally sound on a set of policies Ω if

∀Π ∈ Ω,   d[F(Π | I, P; E), RW(Π; E)] < ε,

where ε captures our tolerance for structural errors on the policy set Ω. This definition ensures that the structural soundness of a model is always defined relative to its ethical assumptions. Thus, for example, if we adopted an ethical position that assigned zero weight to future generations, a model would be considered structurally sound if its near-term predictions were always accurate, even if its long-term predictions were not. Note that in practice we only ever observe the value of RW(Π; E) for one observed policy Π_0. Thus, in its most demanding form (i.e., without auxiliary assumptions that allow plausible counterfactuals to be constructed) the process of model confirmation is restricted to cases where Ω is a singleton set. Although this makes it impossible to establish structural soundness on larger sets, the definition may still be fruitfully deployed to show that a given model is not structurally sound on any set Ω that contains Π_0. This distinction encapsulates the difference between model confirmation and model validation. A model is confirmed if it has not been shown to be structurally unsound. However, no model can ever be “validated” in an open system (1).

The Model

The economic growth model that DICE employs is a version of the Ramsey neoclassical growth model. Firms at time t produce economic output Y_t, using capital K_t and labor L_t as factors of production. The aggregate production technology of firms in the economy is a function Y_t(K_t, L_t), which maps factor inputs to economic output. An amount C_t of output is consumed, and the remainder is reinvested in firms. Capital is assumed to depreciate at a constant rate δ_K. The stock of capital K_t thus evolves according to

K_{t+1} = Y_t(K_t, L_t) + (1 − δ_K)K_t − C_t. [S1]

The DICE model makes several assumptions about functional forms, parameterizations, and data inputs in its implementation of the model. Most important is its assumption about the production technology. Y_t(K_t, L_t) is assumed to be of Cobb–Douglas form and to include an exogenous, time-varying TFP multiplier A_t:

Y_t(K_t, L_t) = A_t K_t^α L_t^(1−α). [S2]

The parameter α is the capital share of production, which is assumed to be constant and equal to 0.3 in DICE. In our simulations we adopt α = 0.25, because this is the long-run average capital share in our data (20). Sensitivity experiments on α suggest that α = 0.3 provides substantially worse forecasts in our data.
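A minimal sketch of Eqs. S1 and S2. The depreciation rate δ_K = 10%/y and all input values below are illustrative assumptions, not values reported in the text.

```python
def production(A, K, L, alpha=0.25):
    """Cobb-Douglas output Y_t = A_t * K_t**alpha * L_t**(1 - alpha) (Eq. S2)."""
    return A * K**alpha * L**(1 - alpha)

def next_capital(K, Y, C, delta_K=0.10):
    """Capital accumulation K_{t+1} = Y_t + (1 - delta_K)*K_t - C_t (Eq. S1).
    delta_K = 10%/y is an assumed illustrative value."""
    return Y + (1 - delta_K) * K - C

# One time step with made-up values:
A, K, L = 1.0, 100.0, 50.0
Y = production(A, K, L)
C = 0.78 * Y                  # consume (1 - s) of output with s = 22%
K_next = next_capital(K, Y, C)
```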

DICE 2013 adopts the following model for the evolution of A_t:

A_t = A_{t−1}(1 + g_t);  g_t = g_{t−1}(1 + δ_A);  g_0 = initial growth rate. [S3]

Training.

Once a window of training data is identified, the goal is to estimate the parameters of the growth process for TFP. Throughout the analysis we assume that δ_A = 0. This is a conservative assumption, because allowing δ_A to be nonzero and estimating both g_0 and δ_A from data yields substantially worse predictions. This is partly due to standard overfitting effects, and partly to the fact that for many training windows the estimated value of δ_A is negative, leading to superexponential growth projections (i.e., growth rates that grow exponentially) that far exceed realized values. Thus, our model of the evolution of A_t is

A_t = (1 + g_0)^t A_0. [S4]

The training data consist of a set of time series for Y_t, K_t, and L_t. For each t in the training window, the inferred value of A_t is given by

A_t = Y_t / (K_t^α L_t^(1−α)). [S5]

The inferred values of {A_t} are then fit with the exponential function (Eq. S4) to obtain 95% confidence intervals on the parameters g_0, A_0 from the training data. We treat A_0 as a free parameter because "true" TFP values are measured with error in practice, and to give the model greater flexibility to match the data. This is again a conservative assumption; a model that only estimates g_0 from the training window and uses the current value of A_t to generate predictions [i.e., A_{t+n} = (1 + g_0)^n A_t] fits past data worse and yields narrower confidence bounds and worse predictions out of sample.
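The training step can be sketched as a log-linear regression. This is a hedged sketch: the text does not specify the fitting procedure beyond Eq. S4, so ordinary least squares on log A_t (which returns point estimates only, not the 95% CIs the paper reports) is an assumption here.

```python
import numpy as np

def infer_tfp(Y, K, L, alpha=0.25):
    """Back out TFP from data via Eq. S5: A_t = Y_t / (K_t^alpha L_t^(1-alpha))."""
    return Y / (K**alpha * L**(1 - alpha))

def fit_growth(A):
    """Fit A_t = A_0 (1 + g_0)^t (Eq. S4) by OLS on
    log A_t = log A_0 + t * log(1 + g_0)."""
    t = np.arange(len(A))
    slope, intercept = np.polyfit(t, np.log(A), 1)
    return np.exp(intercept), np.exp(slope) - 1.0   # A_0, g_0

# Synthetic check: recover a 2% growth rate from noiseless data.
A_series = 1.5 * 1.02 ** np.arange(40)
A0_hat, g0_hat = fit_growth(A_series)
```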

Verification.

Given initial values of capital and labor in the verification data, and the estimated values of g_0, A_0 from the training data, the state equation for capital (Eq. S1) is used to forecast a time series for future capital stocks. To do this we require values of L_t, C_t, and A_t. We assume a perfect forecast of labor supply, and thus the actual realized values of L_t in the verification data are used in the forecasting step. This is a conservative assumption with respect to predictive performance and focuses attention on the model's representation of technical change.

In the standard formulation of the DICE model, consumption C_t is chosen to optimize an intertemporal utility integral, which represents the welfare of current and future generations. This makes the values of C_t dependent on the parameters of this utility function, in particular the elasticity of marginal utility and the pure rate of time preference. In practice, the sequence of savings rates that emerges from this exercise is approximately constant. To avoid having to calibrate the parameters of the welfare function to match past savings data, we assume the savings rate is constant. Thus, consumption C_t is assumed to be a constant fraction (1 − s) of output: C_t = (1 − s)Y_t. The savings rate s is set at 22%. This represents a long-run average consistent with historical data and is a commonly used value in the integrated assessment literature (see ref. 17 for justification and discussion of the constant savings rate assumption). Sensitivity experiments suggest that the savings rate would need to exceed 85% for the verification data to fall inside the confidence interval of a model forecast made in 1920, given inferred growth rates of TFP. This is more than three times the average savings rate we observe in the data.

Finally, to specify the values of A_t we use the estimated values of g_0, A_0 from the training data and Eq. S4 to generate a forecast of TFP. Once forecasted time series of capital stocks and TFP are generated, a GDP forecast is obtained by combining the estimated value of A_t with the forecasted value of K_t and the actual value of L_t according to Eq. S2.
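Putting the pieces together, the verification forecast can be sketched as follows. All values are illustrative: δ_K is an assumed depreciation rate, and the constant labor path stands in for the realized L_t data.

```python
import numpy as np

def forecast_gdp(K0, L_path, A0, g0, alpha=0.25, s=0.22, delta_K=0.10):
    """Iterate Eqs. S1, S2, and S4 forward: TFP grows at rate g0, labor
    follows the realized path L_path, and a constant fraction s of
    output is saved (C_t = (1 - s) * Y_t)."""
    K, Y_path = K0, []
    for t, L in enumerate(L_path):
        A = A0 * (1.0 + g0) ** t              # Eq. S4
        Y = A * K**alpha * L**(1 - alpha)     # Eq. S2
        Y_path.append(Y)
        K = Y + (1.0 - delta_K) * K - (1.0 - s) * Y   # Eq. S1
    return np.array(Y_path)

Y_hat = forecast_gdp(K0=100.0, L_path=[50.0] * 30, A0=1.5, g0=0.02)
```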

The Data

Our model confirmation exercise is made possible by recent estimates of long-run capital stocks for major world economies by Piketty and Zucman (21). Ideally we would conduct this exercise using global data. However, long-run global estimates of capital stocks do not exist. Instead, we focus our analysis on the United States, given its status since the early 20th century as the global frontier economy (21). In theory, global TFP growth is a function of technical progress in the frontier economy, with other regions "converging" toward the frontier. This is the approach taken by Nordhaus in analysis using the regional version of the DICE IAM (31). He projects technological change for the United States, as the frontier region, and assumes that other countries converge partway to the frontier. As it happens, Piketty and Zucman argue, based on analysis of their historical data, that "as a first approximation, productivity growth is the same everywhere in the rich world" (ref. 21, p. 1285). The Piketty–Zucman data are particularly well suited to our exercise, because they enable us to meaningfully test the long-run predictive performance of the DICE model's representation of technical change. Whereas official data for the United States provide capital stock estimates only from 1925 (see US Bureau of Economic Analysis Table 1.1: Current-Cost Net Stock of Fixed Assets and Consumer Durable Goods, www.bea.gov/), the Piketty–Zucman data contain annual estimates since 1870.

The specific data series we use for Yt,Kt, and Lt are all taken from the Piketty–Zucman data, available as annual time series from 1870 to 2010 for the United States. The data are available online at gabriel-zucman.eu/files/capitalisback/USA.xlsx. For Lt we follow DICE and use total population, without any adjustment for participation rates or other demographic factors. The data series we use for total population of the United States (1870–2010) is from column 17 of Table US1 in ref. 20. For national income, Yt, we use the series in column 3 of Table US1 (expressed in 2010 billions US$). As our measure of Kt we take national wealth estimates from column 9 of Table US6a. The figures in that table are expressed as a percentage of national income, which we convert to a 2010 dollar amount using the Yt series.

Discussion

Over the long run, economic growth is primarily determined by technical progress or the rate of TFP growth (31). Unfortunately, we do not have good predictive theories of how TFP evolves (23, 32). In the economic growth literature, TFP represents the part of growth that remains unexplained after accounting for other inputs and is generally calculated as a residual in growth regressions (see, e.g., ref. 18). The use of a dynamical model of TFP as an essential predictive hypothesis, as in DICE, is therefore slightly nonstandard.

Most climate economy models tend to treat TFP as an exogenous time series (e.g., ref. 14), calibrated to result in a plausible rate of growth over the long run—typically based on extrapolating recent trends. For example, in the original DICE model specification (see ref. 33) the starting value for TFP growth was based on a simple average of estimated TFP growth over a short historical window (1961–1970) before the model start date. This growth rate was then assumed to decline gradually (by half every six decades), roughly in line with projections made in a number of other studies, as surveyed in ref. 34. In more recent incarnations of DICE (see ref. 14) TFP growth was calibrated so as to achieve specified growth rates in consumption per capita.*

A difficulty with extrapolating recent trends in TFP to produce a projection for long-run growth relates to the instability of TFP growth rates over time. In our analysis there is evidence of a structural break in the historical series of TFP growth rates. A Quandt likelihood ratio (QLR) test for the presence of structural breaks rejects the null of stability at the 5% level (the QLR test statistic in 1930 is 4.94; see refs. 35 and 36 for a detailed explanation of the QLR test). These unpredictable structural breaks make long-run forecasting very challenging.
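The QLR (sup-Chow) procedure can be sketched for the simplest case of a single break in the mean of a growth-rate series. This is a sketch only: the regression specification used in the paper is not given here, and QLR critical values are nonstandard (see refs. 35 and 36); the synthetic data below are made up to exhibit an obvious break.

```python
import numpy as np

def qlr_statistic(g, trim=0.15):
    """For each candidate break date in the trimmed interior of the
    sample, compute the Chow F-statistic for a shift in the mean of g,
    and return the maximum F and the date at which it occurs."""
    n = len(g)
    lo, hi = int(n * trim), int(n * (1 - trim))
    ssr_r = np.sum((g - g.mean()) ** 2)          # restricted: single mean
    best_F, best_t = -np.inf, None
    for t in range(lo, hi):
        g1, g2 = g[:t], g[t:]
        ssr_u = np.sum((g1 - g1.mean()) ** 2) + np.sum((g2 - g2.mean()) ** 2)
        F = (ssr_r - ssr_u) / (ssr_u / (n - 2))  # one restriction (q = 1)
        if F > best_F:
            best_F, best_t = F, t
    return best_F, best_t

# Synthetic series with an obvious mean shift at t = 40:
rng = np.random.default_rng(0)
g = np.concatenate([rng.normal(0.01, 0.002, 40),
                    rng.normal(0.02, 0.002, 40)])
F_max, t_break = qlr_statistic(g)
```

The trimming of the candidate break dates (15% at each end) follows the usual convention for this test.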

A long-run historical perspective also suggests that using recent growth data as representative of future trends may be a risky forecasting strategy. The mid to late 20th century seems to have been something of a unique period in the history of economic growth (20, 21). Growth in income per capita was virtually nonexistent before 1750. Since then, the frontier economies have experienced a remarkable period of growth, with a pronounced acceleration during the 20th century. For a variety of reasons—most notably a slow-down in the pace and real economic benefits of technical innovation—this remarkable growth might not be sustainable (21). More optimistic views hold that we might be on the cusp of the next wave of innovation, driven largely by advances in information technology (37).† From a modeling perspective, although it is feasible to represent these possibilities by endogenizing the growth process (38) and explicitly incorporating uncertainty into our representation of economic growth, any given structural model will be a speculative predictive tool, and thus our confidence in quantitative predictions from such models will likely be low.

Endogenous Growth Versions of the DICE Model

In this section we report the results of an attempt to perform a similar model confirmation exercise on a modification of the DICE model suggested by Dietz and Stern (27). These authors suggest two possible modifications to the way DICE models technological progress, both of which are designed to endogenize TFP growth. The two models considered are

model 1: A_t = a K_t^b, [S6]
model 2: A_t = (1 − δ_A)A_{t−1} + γ_1 (I_t)^(γ_2). [S7]

Here A_t is the value of TFP at time t, K_t is the capital stock, and I_t is the investment flow at time t (given by sY_t in a constant savings rate model). In model 1, a, b > 0 are parameters. In model 2, δ_A > 0 is a depreciation rate on A_t, and γ_1, γ_2 > 0 are parameters. Both models are designed to capture the idea that knowledge is produced as a "spillover" from capital formation (model 1) or investment (model 2). Model 1 is a modified version of one of the original "first-generation" endogenous growth models (39). A key feature of this model is that if α + b > 1 (where α ≈ 0.3 is the capital share of production), the model exhibits increasing returns in the capital stock. Model 2 is the authors' own model, designed to address the fact that in model 1 A_t depreciates at the same, rather high, rate as capital. In the authors' calibration of model 2, δ_A = 1%/y, whereas capital depreciates at the much higher rate of 10%/y.

We wished to perform a similar model confirmation exercise with these models. Model 2, favored by the authors, is, however, poorly suited to empirical tests. Because δ_A is small, the term A_t − (1 − δ_A)A_{t−1} ≈ A_t − A_{t−1}, which can be negative (this occurs frequently in our data), whereas the term γ_1 (I_t)^(γ_2) must be positive. The mismatch between the signs of these two terms leads to empirical parameter estimates that are highly unstable, and frequently complex-valued.

Model 1 seems to be a better-behaved model, because both A_t and K_t are positive by definition. We have redone our analysis with this model, using training windows to estimate confidence intervals for the two parameters a, b, and then using the coupled state equations for A_t and K_t (i.e., Eqs. S1 and S6) to produce forecasts of TFP and GDP. We find that for early training windows the estimation and forecasting steps are well-behaved, although predicted GDP growth is downward-biased (Fig. S1). However, beyond about 1948, the model's forecasts become highly unstable. The reason is that in later training windows empirical estimates of the upper bound of the 95% CI for parameter b in model 1 increase substantially (from about 0.3 to about 0.7–0.8). With these large exponents there is a substantial positive feedback effect in the model: large values of A_t drive increased productivity, which increases capital formation, which in turn increases the formation of A_t. When combined with population growth this leads to superexponential growth. For some training windows the model predicts that GDP could multiply by a factor of 10^10 over 50 y. Thus, we conclude that this model may also not be a well-behaved empirical tool. Indeed, it is well known that models of this kind need not admit stable equilibria (i.e., balanced growth paths) if population is not constant (ref. 18, p. 401). Dietz and Stern (27) calibrate the value of b to be 0.3; hence, they do not detect the instability we discuss. Their calibration falls outside the 95% confidence interval for b in our 50-y training windows from 1950 onward.
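The positive feedback described above is easy to reproduce in simulation. The parameter values below (a, K0, the constant labor path, and the depreciation rate) are illustrative assumptions chosen only to exhibit the mechanism, not the paper's estimates.

```python
def simulate_model1(a, b, K0, L_path, alpha=0.25, s=0.22, delta_K=0.10):
    """Couple Eq. S6 (A_t = a * K_t**b) with capital accumulation (Eq. S1)
    under a constant savings rate. With alpha + b > 1 the model has
    increasing returns in capital, and output can explode."""
    K, Y_path = K0, []
    for L in L_path:
        A = a * K**b                          # Eq. S6
        Y = A * K**alpha * L**(1 - alpha)
        Y_path.append(Y)
        K = Y + (1.0 - delta_K) * K - (1.0 - s) * Y
    return Y_path

stable = simulate_model1(a=0.5, b=0.3, K0=100.0, L_path=[50.0] * 50)
unstable = simulate_model1(a=0.5, b=0.8, K0=100.0, L_path=[50.0] * 50)
# alpha + b = 0.55 keeps `stable` bounded; alpha + b = 1.05 makes
# `unstable` grow superexponentially within a few decades.
```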

Fig. S1. Results of an attempted confirmation exercise with model 1 (Eq. S6) for 50-y training and verification windows. Notice the instability in the model's predictions that begins around 1948.

These exercises reinforce our main conceptual point about the value of “taking models to data” in confirmation exercises. Simply calibrating a model based on “stylized facts” or informed guesswork does not tell us whether the calibrated parameter values are in line with empirical observations, or whether the model’s predictions will be well-behaved when its parameters are estimated from data.

These endogenous growth models are certainly not the only possibilities. More sophisticated "second-generation" models of endogenous growth, including models of directed technical change, Schumpeterian growth, human capital formation, expanding varieties, and so on, could be considered as possible alternatives. These models are interesting and important but would require an entirely different set of data for verification. In particular, we would likely require much more detailed long-run data on the specifics of innovation processes. Such data are, to our knowledge, not currently available over comparably long time spans. We hope that our work will stimulate further empirical examination of these models on time scales relevant to climate policy applications.

Acknowledgments

This work was supported by the Centre for Climate Change Economics and Policy, funded by the Economic and Social Research Council, and the Grantham Foundation for the Protection of the Environment (A.M.) and the University College Cork Strategic Research Fund (T.K.J.M.).

Footnotes

  • 1To whom correspondence should be addressed. Email: a.millner@lse.ac.uk.
  • Author contributions: A.M. and T.K.J.M. designed research; A.M. and T.K.J.M. performed research; A.M. and T.K.J.M. analyzed data; and A.M. wrote the paper.

  • The authors declare no conflict of interest.

  • This article is a PNAS Direct Submission.

  • *The concept of a structurally sound BC-IAM needs to be interpreted with care. We give a detailed definition that makes our usage of this term more precise in Supporting Information. In particular, our definition separates the empirical relationships in a BC-IAM (the focus of this article) from its ethical assumptions.

  • †This view is confirmed by Nordhaus (12): “The major factor producing different climate outcomes in our uncertainty runs is differential technological change. In our estimates, the productivity uncertainty outweighs the uncertainties of the climate system and the damage function in determining the relationship between temperature change and consumption.” A global sensitivity analysis of the DICE model confirms that its SCC estimates are highly sensitive to the growth rate of TFP (13). A heuristic understanding of why policy recommendations are so sensitive to assumptions about TFP growth can be obtained by studying the social discount rate ρ(t). Under standard assumptions, the change in social welfare that arises from a small change in consumption Δ_t that occurs t years in the future is given by Δ_t e^(−ρ(t)t). Standard computations (14) show that in a deterministic setting ρ(t) = δ + ηg(t), where δ is the pure rate of social time preference, η is the elasticity of marginal utility, and g(t) is the average consumption growth rate between the present and time t. In most cases the term ηg(t) is the dominant contribution to ρ(t). Because consumption growth g(t) is driven by TFP growth in DICE, the present value of future climate damages is highly sensitive to TFP growth. For example, for δ = 1%/y, η = 2, and g(100) = 1%/y, an incremental climate damage of $100 that occurs 100 y from now will be valued at $100·e^(−(0.01 + 2×0.01)×100) ≈ $5 in present value terms. For g(100) = 2%/y, however, the same $100 damage would be worth $100·e^(−(0.01 + 2×0.02)×100) ≈ $0.7. Thus, an increase in consumption growth from 1%/y to 2%/y reduces the current welfare cost of climate damages that occur in 100 y by a factor larger than 7.
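The footnote's arithmetic can be checked directly; this sketch only reproduces the numbers above.

```python
import math

def present_value(damage, delta, eta, g, t):
    """Present value damage * exp(-rho * t), with rho = delta + eta * g
    (the deterministic Ramsey discounting rule quoted in the footnote)."""
    return damage * math.exp(-(delta + eta * g) * t)

pv_slow = present_value(100, delta=0.01, eta=2, g=0.01, t=100)  # about $5
pv_fast = present_value(100, delta=0.01, eta=2, g=0.02, t=100)  # about $0.7
ratio = pv_slow / pv_fast                                        # > 7
```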

  • ‡DICE assumes that the growth rate of TFP is an exponential function that slowly decays from an initial value to a smaller long-run value. The free parameters are the initial growth rate and the rate of decline of the growth rate. A large literature has developed endogenous growth models that relate the evolution of TFP to endogenous economic variables (see refs. 19 and 20 for a review of applications to climate economics). Because we wish to stay as close as possible to the methodology used by DICE, we do not investigate the empirical performance of these models here, but see comments below and Supporting Information, Endogenous Growth Versions of the DICE Model.

  • §This finding is dependent on choices of welfare parameters, which in turn affect the social discount rate. All else equal, lower (higher) social discount rates make SCC values more (less) dependent on forecasts of the near future.

  • ¶Empirical tests of endogenous growth models suggest they are often more difficult to reconcile with historical data than simpler neoclassical alternatives (25, 26). Supporting Information, Endogenous Growth Versions of the DICE Model describes our attempt to perform a similar model confirmation exercise on two endogenous growth models recently suggested as alternatives to DICE’s model of TFP growth (27). We find that both models are poorly behaved, being either not specified in a manner amenable to empirical estimation or exhibiting instabilities that cause their predictions to be uncontrolled when their parameters are estimated from historical data.

  • #Ref. 28 presents a further example of the benefits of hindcasting in a model of US energy intensity.

  • *In DICE 2013, g_A(2015) = 7.9% per 5 y and δ_A = 0.6% per 5 y. This specification leads to growth in consumption per capita of 1.89%/y from 2010 to 2100 and 1.07%/y from 2100 to 2200. See the 2013 DICE user manual (15) for further details.

  • †The summer 2015 issue of the Journal of Economic Perspectives (Vol. 29, No. 3) contains an illuminating symposium on the possible effects of digital innovation on labor markets and growth.

  • This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1604121113/-/DCSupplemental.

References

  1. Oreskes N, Shrader-Frechette K, Belitz K (1994) Verification, validation, and confirmation of numerical models in the Earth sciences. Science 263(5147):641–646.
  2. Flato G, et al. (2013) Evaluation of climate models. Climate Change 2013: The Physical Science Basis, ed Stocker T, et al. (Cambridge Univ Press, Cambridge, UK), Chap 9.
  3. Knutti R (2008) Should we believe model predictions of future climate change? Philos Trans A Math Phys Eng Sci 366(1885):4647–4664.
  4. Clarke L, et al. (2014) Assessing transformation pathways. Climate Change 2014: Mitigation of Climate Change, ed Edenhofer O, et al. (Cambridge Univ Press, Cambridge, UK), Chap 6.
  5. Stern N (2013) The structure of economic modeling of the potential impacts of climate change: Grafting gross underestimation of risk onto already narrow science models. J Econ Lit 51(3):838–859.
  6. Pindyck R (2013) Climate change policy: What do the models tell us? J Econ Lit 51(3):860–872.
  7. Millner A, Dietz S, Heal G (2013) Scientific ambiguity and climate policy. Environ Resour Econ 55(1):21–46.
  8. Nordhaus W (2015) Climate clubs: Overcoming free-riding in international climate policy. Am Econ Rev 105(4):1339–1370.
  9. Grubb M, Köhler J, Anderson D (2002) Induced technical change in energy and environmental modeling: Analytical approaches and policy implications. Annu Rev Energy Environ 27:271–308.
  10. Interagency Working Group on the Social Cost of Carbon (2013) Technical support document: Technical update of the social cost of carbon for regulatory impact analysis—Under executive order 12866 (US Government, Washington, DC).
  11. Committee on Assessing Approaches to Updating the Social Cost of Carbon; Board on Environmental Change and Society (2016) Assessment of Approaches to Updating the Social Cost of Carbon: Phase 1 Report on a Near-Term Update (National Academies of Sciences, Engineering, and Medicine, Washington, DC), pp 1–74.
  12. Nordhaus WD (2011) Estimates of the social cost of carbon: Background and results from the RICE-2011 model. Working Paper 17540 (National Bureau of Economic Research, Cambridge, MA).
  13. Anderson B, Borgonovo E, Galeotti M, Roson R (2014) Uncertainty in climate change modeling: Can global sensitivity analysis be of help? Risk Anal 34(2):271–293.
  14. Gollier C (2012) Pricing the Planet’s Future: The Economics of Discounting in an Uncertain World (Princeton Univ Press, Princeton).
  15. Nordhaus WD, Sztorc P (2013) DICE 2013: Introduction and User’s Manual (Yale Univ, New Haven, CT).
  16. Hope C (2006) The marginal impact of CO2 from PAGE2002: An integrated assessment model incorporating the IPCC’s five reasons for concern. Integrated Assess 6(1):19–56.
  17. Tol R (1997) On the optimal control of carbon dioxide emissions: An application of FUND. Environ Model Assess 2(3):151–163.
  18. Golosov M, Hassler J, Krusell P, Tsyvinski A (2014) Optimal taxes on fossil fuel in general equilibrium. Econometrica 82(1):41–88.
  19. Acemoglu D (2008) Introduction to Modern Economic Growth (Princeton Univ Press, Princeton).
  20. Gillingham K, Newell RG, Pizer WA (2008) Modeling endogenous technological change for climate policy analysis. Energy Econ 30(6):2734–2753.
  21. Piketty T, Zucman G (2014) Capital is back: Wealth-income ratios in rich countries 1700–2010. Q J Econ 129:1255–1310.
  22. Gordon RJ (2012) Is U.S. economic growth over? Faltering innovation confronts the six headwinds. Working Paper 18315 (National Bureau of Economic Research, Cambridge, MA).
  23. Iverson T, Denning S, Zahran S (2015) When the long run matters. Clim Change 129:57–72.
  24. Prescott EC (1998) Needed: A theory of total factor productivity. Int Econ Rev 39:525–551.
  25. Pack H (1994) Endogenous growth theory: Intellectual appeal and empirical shortcomings. J Econ Perspect 8(1):55–72.
  26. Jones CI (1995) Time series tests of endogenous growth models. Q J Econ 110:495–525.
  27. Dietz S, Stern N (2015) Endogenous growth, convexity of damage and climate risk: How Nordhaus’ framework supports deep cuts in carbon emissions. Econ J 125(583):574–620.
  28. Dowlatabadi H, Oravetz MA (2006) US long-term energy intensity: Backcast and projection. Energy Policy 34(17):3245–3256.
  29. Revesz RL, et al. (2014) Global warming: Improve economic models of climate change. Nature 508(7495):173–175.
  30. Heal G, Millner A (2014) Uncertainty and decision making in climate change economics. Rev Environ Econ Policy 8:120–137.
  31. Nordhaus WD (2010) Economic aspects of global warming in a post-Copenhagen environment. Proc Natl Acad Sci USA 107(26):11721–11726.
  32. Easterly W, Levine R (2001) What have we learned from a decade of empirical research on growth? It’s not factor accumulation: Stylized facts and growth models. World Bank Econ Rev 15:177–219.
  33. Parente SL, Prescott EC (2005) A unified theory of the evolution of international income levels. Handbook of Economic Growth, eds Durlauf SN, Aghion P (North-Holland, Amsterdam), Vol 1, Part B, pp 1371–1416.
  34. Nordhaus WD (1992) The DICE model: Background and structure of a dynamic integrated climate-economy model of the economics of global warming. Cowles Foundation Discussion Paper 1009 (Cowles Foundation for Research in Economics, Yale Univ, New Haven, CT).
  35. Nordhaus WD, Yohe GW (1983) Changing Climate: Report of the Carbon Dioxide Assessment Committee (National Academies, Washington, DC), pp 87–153.
  36. Quandt RE (1960) Tests of the hypothesis that a linear regression system obeys two separate regimes. J Am Stat Assoc 55:324–330.
  37. Stock JH, Watson M (2011) Introduction to Econometrics (Pearson, Boston), 3rd Ed.
  38. Brynjolfsson E, McAfee A (2014) The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (Norton, New York).
  39. Acemoglu D, Aghion P, Bursztyn L, Hemous D (2012) The environment and directed technical change. Am Econ Rev 102(1):131–166.
  40. Romer P (1986) Increasing returns and long-run growth. J Polit Econ 94:1002–1037.