Economic impacts of tipping points in the climate system

Significance Tipping points in the climate system are one of the principal reasons for concern about climate change. Climate economists have only recently begun incorporating them in economic models. We synthesize this emerging literature and provide unified, geophysically realistic estimates of the economic impacts of eight climate tipping points with an emphasis on the social cost of carbon, a key policy input.


This third criterion is admittedly the most subjective to apply, which is reflected in an additional 4 articles that do contain a geophysical component, albeit still stylized in some key way (marked 'stylized/geophysical' in the table). Our selection criterion for geophysical realism is best illustrated by two examples, both from the same scholar, that together characterise the spectrum (see the simple schematic below). On the ad hoc end of the spectrum are arbitrary changes to aggregated damage functions in Integrated Assessment Models (IAMs), which map the increase in global mean temperature into welfare-equivalent losses in global GDP per capita. For example, in The Climate Casino, Nordhaus suggested including tipping points by assuming a "stylized tipping-point damage function", according to which damages increase sharply at warming of 3.5°C and become prohibitive beyond 4.5°C (2). In Nordhaus' own words: "These assumptions are at the outer limit of what seems plausible and have no solid basis in empirical estimates of damages" (p. 213). At the geophysically realistic end of the spectrum is Nordhaus' subsequent model of disintegration of the Greenland Ice Sheet (GIS) (3). The GIS module is calibrated on results from the underlying literature on ice-sheet dynamics, principally (4). Damages are calibrated on a detailed study of the relationship between sea level rise, coastal defence costs and the costs of coastal flooding and permanent inundation (5). The result is a more realistic, yet tractable climate-economy IAM. Figure 1 shows that economic studies into climate tipping points date back to at least the mid-1990s, with the first paper incorporating geophysical realism appearing around the turn of the millennium (28).

Permafrost carbon feedback
In the first stage, near-surface permafrost thaw is a linear function of warming relative to time zero:

PF_extent(t) = 1 − β_PF [∆T_AT(t) − ∆T_AT(0)],

where PF_extent(t) ≡ PF_area(t)/PF_area(0), i.e. PF_extent(t) is the area of permafrost remaining at time t relative to time zero, ∆T_AT is the global mean surface air temperature relative to pre-industrial, and β_PF is a coefficient representing the sensitivity of permafrost thaw to temperature, which Kessler calibrated by regressing estimates of thaw on temperature from the literature. t = 0 in our model is the year 2010.
The amount of carbon in freshly thawed permafrost at time t, C_thawedPF(t), is then the product of the total stock of carbon locked in the near-surface northern circumpolar permafrost region, C_PF, and the area of permafrost freshly thawed. Once thawed, the principal way in which carbon is released to the atmosphere is microbial decomposition, and this happens slowly. Some of the carbon is released as CO2 and some as CH4. Kessler's model divides the stock of thawed carbon into a passive reservoir that releases no carbon and an active reservoir that decomposes exponentially and releases CO2 and CH4 in fixed proportion. Cumulative CO2 emissions to the atmosphere from thawed permafrost, CCum_PF, therefore follow the exponential decomposition of the active reservoir, where propPassive is the proportion of thawed permafrost in the passive reservoir and τ is the e-folding time of permafrost decomposition in the active reservoir, which is multiple decades (see below). The fluxes of CO2 and CH4 are obtained from total carbon emissions, where propCH4 is the share of CH4 emissions in total carbon emissions.
We can directly reproduce the permafrost carbon emissions estimated by (30), (27) and (57) by fitting β_PF, propPassive, and τ, each parameter restricted to lie within physically plausible bounds. Table 2 reports the various parameter values. Figure 6 shows the fit to cumulative CO2 emissions from (27) and (57). CH4 emissions for these two papers are obtained simply by using the fitted parameters in combination with the fixed value of propCH4 from (30).
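The permafrost module described above can be sketched numerically. All parameter values below are illustrative placeholders, not the fitted values reported in Table 2, and the year-by-year decay bookkeeping is one possible discretisation of the exponential decomposition:

```python
import math

# Illustrative parameters (assumptions, not the paper's fitted values)
C_PF = 1035.0        # GtC locked in near-surface permafrost
beta_PF = 0.17       # fraction of permafrost thawed per degree C of warming
prop_passive = 0.4   # share of thawed carbon that never decomposes
prop_CH4 = 0.023     # share of CH4 in total carbon emissions
tau = 70.0           # e-folding time of decomposition, years

def pf_extent(dT_rel):
    """Fraction of initial permafrost area remaining: linear in warming
    relative to time zero, floored at zero."""
    return max(0.0, 1.0 - beta_PF * dT_rel)

def cumulative_co2(dT_path):
    """Cumulative CO2 emissions (GtC) from thawed permafrost along a warming
    path: each year's freshly thawed carbon is split into passive and active
    reservoirs, and the active reservoir decays exponentially."""
    total = 0.0
    prev_extent = 1.0
    active = 0.0
    for dT in dT_path:
        ext = pf_extent(dT)
        fresh = C_PF * max(0.0, prev_extent - ext)   # freshly thawed carbon
        prev_extent = ext
        active += fresh * (1.0 - prop_passive)
        released = active * (1.0 - math.exp(-1.0 / tau))  # this year's decay
        active -= released
        total += released * (1.0 - prop_CH4)          # CO2 share only
    return total
```

Running this on a slow warming ramp yields cumulative emissions that are positive but bounded above by the active share of the permafrost carbon stock, as the exponential decomposition requires.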

Ocean methane hydrates
There have been two studies of the economic cost of destabilization of ocean methane clathrates/hydrates. The first is Whiteman et al. (53), who implemented what-if scenarios in PAGE09, releasing a pulse of CH4 emissions of fixed size and duration into the model at a given point in time. These scenarios were based on the work of (58). Once triggered, emissions are released at the constant rate CH4_OMH/∆_OMH, where CH4_OMH is the pre-specified total amount of methane released, e.g. 50Gt in the case of the main specification of (53), and ∆_OMH is the duration of the release, e.g. 10 years.
Applying this formalism to (59), CH4_OMH/∆_OMH ∈ {0.2, 1.784, 7.8} and total CH4 released from ocean CH4 hydrates is bounded only by the product of CH4_OMH/∆_OMH and the model horizon, i.e. the inequality constraint in Equations (6) and (7) does not bind. I_OMH(t) is an indicator function taking a value of zero before the hazard event is triggered and one thereafter. In general, its transition function depends on its own value in the previous period, the current atmospheric temperature, and an i.i.d. random shock ε(t). Specifically, since the triggered state is absorbing, the probability transition matrix for I_OMH(t) has rows (1 − p_OMH(t), p_OMH(t)) conditional on I_OMH(t − 1) = 0 and (0, 1) conditional on I_OMH(t − 1) = 1, where p_OMH(t) is the probability that the CH4 emissions pulse is triggered in year t. This probability is increasing in contemporaneous warming, with hazard rate b_OMH.
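The absorbing indicator dynamics can be illustrated as follows; the exact functional form of the trigger probability p_OMH(t) is an assumption made here purely for illustration, since it is calibrated separately in the paper:

```python
import math
import random

def trigger_prob(dT, b):
    """Assumed hazard specification (illustrative only): probability the
    event is triggered in a given year, increasing in contemporaneous
    warming dT with hazard rate b."""
    return 1.0 - math.exp(-b * dT)

def simulate_indicator(dT_path, b, rng):
    """Absorbing two-state Markov chain: the indicator stays 0 until
    triggered, then stays 1 forever (transition rows [1-p, p] and [0, 1])."""
    I = 0
    out = []
    for dT in dT_path:
        if I == 0 and rng.random() < trigger_prob(dT, b):
            I = 1
        out.append(I)
    return out
```

Simulated paths are non-decreasing in {0, 1} by construction, matching the definition of I_OMH(t) as zero before the hazard event and one thereafter.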
In order to calibrate the hazard rate, we use the study of Archer et al. (62), which estimates the cumulative amount of CH4 to be released from ocean hydrates as a function of warming and of the critical bubble fraction. Calibrating the hazard rate on (62) means that we re-interpret (53) in the context of the global reservoir of CH4 hydrates on continental shelves and slopes, rather than the reservoir of CH4 locked in subsea permafrost in the Arctic region. This is justified, since other research suggests a large release of CH4 from the Arctic subsea permafrost within the next two centuries is extremely unlikely (63). 3 According to (62), cumulative CH4 released in very long-run equilibrium upon 1°C warming varies hugely from about 10GtCH4 to 541GtCH4 for critical bubble fractions of 10% and 1% respectively. 4 Upon 3°C warming the range increases to about 32-1084Gt. Moreover, (62) report that there is next to no empirical evidence on the critical bubble fraction. In the absence of such evidence, we try three alternative specifications of the probability distribution of equilibrium cumulative CH4 release as a function of the critical bubble fraction (Table 3). The uniform distribution is an application of the principle of insufficient reason. The triangular and especially the beta distribution are more conservative in the sense of assigning more probability mass to higher critical bubble fractions and in turn lower equilibrium CH4 releases.
Irrespective of the critical bubble fraction, CH 4 released from melting ocean hydrates is thought to take a very long time to reach the atmosphere, much longer than permafrost carbon. Therefore, in order to convert the equilibrium CH 4 release into a transient release, we conservatively assume a release rate of just 0.2%, implying an e-folding time of 500 years and approximately 3,000 years for equilibrium to be reached (also see 62).
To give an example of how we then calibrate the hazard rate b_OMH, we use a middle-of-the-road emissions scenario from the Intergovernmental Panel on Climate Change (IPCC); fed into our climate module excluding tipping points, GMST in 2035 will be about 1.6°C above pre-industrial. Using the approach just described to represent the modelling results of (62), we estimate a 24.4% probability of a 50GtCH4 cumulative release by 2035 assuming a beta distribution. This gives b_OMH = 0.118. We follow the same procedure to assign hazard rates using the uniform and triangular distributions, and apply it to different durations of emissions pulse investigated by (53), as well as the scenarios in (59). Table 3 reports all the estimated hazard rates. We prefer the beta distributions except in sensitivity analysis, as they are more conservative.

3 Indeed, the scenarios in (53) were criticised at the time of publication for being unrealistic in the context of Arctic subsea processes; see Nature volume 300, p529.
4 Based on digitising Figure 7 in their paper.

Amazon rainforest dieback
Dieback of the Amazon rainforest was included in the study of Cai et al. (60) as a carbon-cycle feedback. This is the study we incorporate in our analysis. Naturally a wide range of other economically important consequences of Amazon rainforest dieback are thereby excluded, including those on biodiversity and ecosystems. These have yet to be incorporated in any economic modelling study, to the best of our knowledge.
As mentioned above, (60) model tipping points through survival analysis. In the case of Amazon rainforest dieback, 50GtC is released over 50 years upon triggering the hazard event.
Using parallel formalism to ocean methane hydrates, CO2 emissions from Amazon rainforest dieback at time t, CO2_AMAZ(t), are released at the constant rate CO2_AMAZ/∆_AMAZ once the hazard event is triggered, where CO2_AMAZ = 50GtC and ∆_AMAZ = 50 years. The probability of the indicator function I_AMAZ(t) transitioning from zero to one takes the same form as for ocean methane hydrates, with the hazard rate b_AMAZ = 0.00163 in (60) taken from the expert elicitation study of (64).

Greenland Ice Sheet
Our GIS module follows Nordhaus (3), which relates atmospheric temperature and the volume of the GIS. Assuming this relationship is reversible, (3) specified an equilibrium temperature ∆T*_GIS(t), defined as the atmospheric temperature increase relative to initial temperature that is associated with a particular degree of melting of the GIS in equilibrium, as a function of V_GIS(t) ∈ [0, 1], the volume of the GIS expressed as a fraction of the initial volume.
Nordhaus (3) showed that the change in specification makes little difference to the optimal emissions path, which involves relatively limited warming, but can make a difference in high-emissions scenarios.
The difference equation for V_GIS(t), i.e. the GIS melt rate, embodies the basic idea of Eq. (16): melting of the GIS depends on the difference between the actual atmospheric temperature and the equilibrium GIS temperature, as well as on the volume of the GIS at the time. The melt rate coefficient β_GIS = −0.0000106 is based on regression analysis of estimates from (4). 8

6 (3) also reports runs with an alternative exponent of 0.5 in the equilibrium temperature relationship and finds the results are very similar.
7 Noting that the melt rate coefficient β_GIS below also needs to be recalibrated to −0.0000088 to fit (4).
Sea level rises linearly in response to GIS melt:

SLR_GIS(t) = 7 × [1 − V_GIS(t)] metres,

where SLR_GIS is defined relative to the year 2000. This implies that complete disintegration of the GIS would increase global mean sea level by 7 metres.
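A minimal numerical sketch of the GIS recursion follows. The mapping from volume to equilibrium temperature (equilibrium_temp, with a hypothetical 3.4°C full-melt level) and the exact form of the melt equation are illustrative assumptions, not the calibration of (3):

```python
def equilibrium_temp(V, T_full_melt=3.4):
    """Hypothetical mapping from GIS volume to the warming level at which
    that volume is the equilibrium; linear purely for illustration."""
    return T_full_melt * (1.0 - V)

def simulate_gis(dT_path, V0=1.0, beta_gis=-0.0000106):
    """GIS volume and sea level rise (relative to 2000) along a warming path.
    Volume falls when actual warming exceeds the equilibrium temperature
    associated with the current volume (beta_gis is negative); complete
    disintegration raises sea level by 7 m."""
    V = V0
    slr = []
    for dT in dT_path:
        dV = beta_gis * (dT - equilibrium_temp(V))
        V = min(1.0, max(0.0, V + dV))
        slr.append(7.0 * (1.0 - V))
    return V, slr
```

With |β_GIS| of order 1e-5 per year, sustained warming melts the ice sheet only over many millennia, consistent with the slow dynamics described above.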

West Antarctic Ice Sheet
Disintegration of the West Antarctic Ice Sheet (WAIS) was modelled by Diaz and Keller (61). Like Nordhaus (3), they built a simple model of WAIS melting for incorporation in DICE. Unlike Nordhaus, who focused on a best estimate around which selected sensitivity analysis was performed, (61) used the framework of survival analysis. In particular, global mean sea level rise from WAIS melting, SLR_WAIS(t), is given by

SLR_WAIS(t) = SLR_WAIS(t − 1) + r_WAIS × I_WAIS(t),

where r_WAIS is an exogenous parameter determining the annual contribution to global mean sea level upon triggering disintegration of the ice sheet, assumed lognormally distributed with a mean of 3.3mm/yr and a standard deviation of 1.65mm/yr. This implies it takes on average 1000 years for the WAIS to disintegrate completely after its tipping point is crossed. I_WAIS(t) is the indicator function for WAIS disintegration, whose probability of transitioning from zero to one, conditional on having been zero in year t − 1, is increasing in contemporaneous warming with hazard rate b_WAIS = 0.0043, also based on the expert elicitation exercise of (64).

Arctic sea-ice loss/surface albedo feedback
Changes in global ice and snow cover also affect the surface albedo feedback (SAF), increasing net radiative forcing. While these effects are implicitly captured in the equilibrium climate sensitivity (ECS) parameter in simple climate models, i.e. the steady-state increase in temperature in response to a doubling of the atmospheric CO 2 concentration, doing so assumes that the marginal forcing from an increase in temperature is constant across temperatures.
However, as the area of ice and snow diminishes, the marginal forcing from further increases in temperature decreases. This SAF dynamic has been modelled by Yumashev et al. (57) using PAGE-ICE and we replicate their model here.
(57) use a quadratic fit of the SAF observed across the CMIP5 models, shown in the top panel of Figure 7. This falling SAF curve describes the weakening feedback loop between changes in temperature and changes in albedo. For low levels of warming, the SAF is greater than the constant value represented in the ECS; as sea-ice and land snow diminish, the feedback effect drops. When sea ice and land snow are absent, the SAF effect is zero. The total radiative forcing due to albedo, however, always increases with temperature, and reaches its maximum when sea ice and land snow are absent.
Total SAF forcing is the integral of the SAF feedback effect across the change in temperature, reaching 2.67 Wm −2 at warming of 10 • C. The ECS follows a non-linear curve calculated as a function of the ECS in the last period, and accounting for the different level of feedback compared to a constant level. As a consequence, adding the SAF to the base climate model can result in lower warming eventually.
The calculations for the SAF correction are as follows. The principle of the SAF model is to correct temperatures calculated under the process used in PAGE-ICE, so we first reproduce this temperature calculation. Global PAGE-ICE atmospheric temperature, ∆T_ATM−PAGE1(t), adjusts towards its equilibrium value with warming half-life FRT, drawn from a triangular distribution from 10 to 55 with mode of 20. The surface albedo feedback is then calculated using a quadratic approximation, under which the SAF decreases more rapidly as temperature increases, and SAF forcing is obtained as an integral over this quadratic; δ, the nonlinearity of the SAF, is drawn from a symmetric triangular distribution from −1 to 1. The adjustment to the SAF forcing, ∆F_SAF(t), is given by a two-segment correction, in which ψ is the integration constant for SAF forcing at the segment switch point, α is the linear SAF segment mean and σ is the linear SAF segment standard deviation. Also using ∆F_SAF(t), adjusted values ECS' and FRT' are calculated to account for the difference between the temperature-dependent SAF and its constant approximation (0.34959 W/m²/°C).
Then ∆T_ATM−PAGE2(t), the adjusted temperature time-series, is calculated identically to ∆T_ATM−PAGE1(t), but using ECS', FRT', and the additional forcing ∆F_SAF(t). The temperature adjustment produced by the SAF model, ∆T_ATM−PAGE2(t) − ∆T_ATM−PAGE1(t), is then added to the main temperature in the model.
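The two-run correction pattern can be sketched as follows. The relaxation discretisation, the half-life interpretation of FRT and all parameter values are assumptions for illustration, not PAGE-ICE's exact equations:

```python
import math

F2XCO2 = 3.71  # Wm-2 forcing from a doubling of CO2 (standard value)

def page_temperature(forcing_path, ecs, frt):
    """PAGE-style first-order relaxation of atmospheric temperature towards
    its equilibrium ecs * F / F2XCO2. Treating frt as a half-life, the
    annual adjustment fraction is 1 - exp(-ln(2)/frt) (assumed form)."""
    rate = 1.0 - math.exp(-math.log(2.0) / frt)
    T = 0.0
    out = []
    for F in forcing_path:
        T += rate * (ecs * F / F2XCO2 - T)
        out.append(T)
    return out

def saf_adjustment(forcing_path, saf_forcing_path, ecs, frt, ecs_adj, frt_adj):
    """The correction pattern described above: run the temperature model
    twice, once as in base PAGE-ICE and once with adjusted ECS/FRT plus the
    additional SAF forcing, and return the difference, which is added to the
    model's main temperature."""
    base = page_temperature(forcing_path, ecs, frt)
    adj_forcing = [f + fs for f, fs in zip(forcing_path, saf_forcing_path)]
    adjusted = page_temperature(adj_forcing, ecs_adj, frt_adj)
    return [a - b for a, b in zip(adjusted, base)]
```

With zero SAF forcing and unchanged parameters the adjustment is identically zero, while positive SAF forcing produces a positive temperature correction, as expected.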

Slowdown of the Atlantic Meridional Overturning Circulation
Weakening of the Atlantic Meridional Overturning Circulation (AMOC) or thermohaline circulation, 9 whether partial or full, has inspired a number of numerical modelling studies in climate economics (6; 66; 11; 59; 29; 32; 36; 67; 48). The majority of these take a stylised approach. Of those aiming for realism, we choose to incorporate the results of Anthoff et al. (6) in our model, because of their unique focus on the effects of AMOC slowdown at the national level. This is arguably central to the economic evaluation of AMOC slowdown, because its physical effects would vary significantly across the world, from a reduction in regional temperature of several degrees, all else being equal, to an increase in regional temperature of a few tenths of a degree (see 6, fig. 1). The basic logic is that the ocean circulation redistributes heat, rather than creating or destroying it, and countries vary in their exposure to this heat redistribution, as well as the effects of global warming more broadly, depending on their physical location. AMOC slowdown is expected to have physical effects other than temperature change, for instance effects on precipitation and regional sea levels (68), but these have yet to be incorporated in economic studies. The national temperature delta arising from AMOC slowdown hence phases in linearly over ∆_AMOC = 35 years once the tipping point is triggered, where ∆T_AT_AMOC(i) is the permanent difference in national annual average temperature as a result of AMOC slowdown in country i. The data points corresponding to ∆T_AT_AMOC(i) were kindly provided by Anthoff and colleagues for all countries they covered. I_AMOC(t) is the indicator function, whose transition probability from zero to one is conditional on I_AMOC(t − 1) = 0.
To calibrate the hazard rate for each of the four scenarios in (6), we compile likelihoods as a function of global mean temperature increase for distinct AMOC shutdown events ranging from a weakening of 11% to a full shutdown. We obtain these from the IPCC Fifth Assessment Report (69), its Special Report on Global Warming of 1.5°C (70), and (71). Given the limited measurements of AMOC intensity, these numbers reflect a combination of model-based estimates and expert judgement. We proceed in two steps: (i) we take the convex combination of the AMOC shutdown events from the literature that most closely resembles the what-if scenario at hand. To obtain a hazard rate b_AMOC, we then (ii) calibrate Equation (22) by minimizing the sum of squared differences to the likelihoods obtained in step (i). We estimate b_AMOC = 1.6 for a 7% slowdown, 0.611 for a 24% slowdown, 0.54 for 27% and 0.135 for 67%.
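Step (ii) can be sketched as a one-dimensional grid search. The annual hazard specification used here, p(t) = 1 − exp(−b ∆T(t)), is an illustrative stand-in for Equation (22), not the paper's exact form:

```python
import math

def cumulative_trigger_prob(b, dT_path):
    """Probability the event has been triggered by the end of dT_path under
    an assumed annual hazard p(t) = 1 - exp(-b * dT(t)): survival
    probabilities multiply across years."""
    survive = 1.0
    for dT in dT_path:
        survive *= math.exp(-b * dT)
    return 1.0 - survive

def calibrate_b(targets, dT_paths, grid=None):
    """Choose b minimising the sum of squared differences between
    model-implied trigger probabilities and the compiled literature
    likelihoods. targets and dT_paths are parallel lists."""
    if grid is None:
        grid = [i / 1000.0 for i in range(1, 2001)]  # b in (0, 2]
    def sse(b):
        return sum((cumulative_trigger_prob(b, p) - q) ** 2
                   for p, q in zip(dT_paths, targets))
    return min(grid, key=sse)
```

On a synthetic target generated with a known hazard rate, the grid search recovers that rate, which is the basic check one would run before calibrating to the literature likelihoods.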

Weakening of the Indian Summer Monsoon
The first integrated assessment of the Indian Summer Monsoon (ISM) and its response to climate change has recently been carried out by Belaia (10). This is based on coupling a version of Nordhaus' regionally disaggregated RICE IAM (72) to a model of the ISM (73).
The ISM is driven by greater heating of the land surface relative to the ocean in summer, which creates a pressure gradient that drives moist ocean air over the Indian subcontinent, where it rises and condenses. However, ISM rainfall displays important year-to-year variation and the ISM has the potential to abruptly change regime from wet to dry and vice versa.
Schewe and Levermann's model generates these dynamics by incorporating reduced-form representations of two competing feedback processes. The first is the so-called moisture advection feedback, a positive feedback whereby monsoon rains release latent heat, which strengthens the monsoon circulation and brings more rainfall in turn. The second is the dry-subsidence effect, a negative feedback whereby high pressure reduces rainfall; the decreased rainfall leads to less latent heat being released, which in turn sustains the dry phase. High pressure also deflects winds away from the monsoon region. In Belaia's model (10), rainfall depends on both climate change, through multiple channels, and regional emissions of sulphur dioxide, which reflect incoming solar radiation, reduce heating over the Indian subcontinent and weaken the ISM.
The key output of the ISM model that feeds into damages to India (see below) is average rainfall over the Indian subcontinent over the summer monsoon season:

P̄(t) = (1/136) Σ_d P(d, t),

where P(d, t) is rainfall on day d of year t and there are 136 days in each monsoon season. 10 Each day is either wet or dry: day d is wet if a uniform random draw Pr(d, t) ∼ U(0, 1) falls below the wet-day probability, capturing random variation in day-to-day weather. There is no rainfall on a dry day, whereas rainfall on a wet day, P_wet, is an increasing function of atmospheric temperature, since a warmer atmosphere can hold more water. The initial value of P_wet is 9mm per day and it increases by 0.42mm/day/°C of global warming.
The probability of a wet day during the first δ days of the season, the onset, depends on planetary albedo, where p_m = 0.82 is the maximum probability of a wet day. 11 The formulation in Eq. (26) makes rainfall during the onset of the season a function of albedo A_pl(t), in particular its relation to a critical albedo value A_pl,crit(t). If the actual albedo exceeds the critical value, the probability of a wet day is at its minimum. The critical albedo value is increasing in the atmospheric concentration of CO2, Σ_i S_i(t) + S̄, whose derivation is explained in the following section. The actual albedo depends on T_pl, the fraction of light transmitted by the aerosol layer, A_s, the present value of the surface albedo, the coefficients β_pl and α_pl,3, representing the backscatter fraction and mass scattering efficiency respectively, and B_SO4(t), the regional sulphate burden over the Indian peninsula. This last quantity depends on SO2 emissions in the region. Assuming the actual planetary albedo does not exceed the critical value, the probability of a wet day during the first δ days of the season rises towards its maximum p_m. The probability of a wet day after the first δ days of the season depends positively on how wet the previous δ days were, where δ = 16 days, 12 a representation of the moisture advection and dry-subsidence feedbacks.

10 For computational reasons, we use a four-day time step, so P(d, t) changes at most once every four days and there are 136 days in the season, compared with 135 in (10).
11 By bounding the probability of a wet day during the onset of the monsoon season, the system does not become irrevocably locked into either a wet or dry state.
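A stochastic sketch of a single monsoon season follows. The fixed onset probability stands in for the albedo-dependent expression, and the post-onset persistence rule (wet-day probability proportional to the share of wet days in the previous δ days) is a simplified stand-in for the feedback formulation above:

```python
import random

def simulate_season(dT, p_onset, p_wet_max=0.82, delta=16, days=136,
                    rng=None):
    """Illustrative monsoon season. During the first delta days the wet-day
    probability is p_onset (a stand-in for the albedo-dependent onset
    expression); thereafter it is p_wet_max times the share of wet days in
    the previous delta days, a crude moisture-advection / dry-subsidence
    memory. Wet-day rainfall is 9 mm/day plus 0.42 mm/day per degree of
    warming; dry days have no rainfall."""
    rng = rng or random.Random(0)
    p_wet_rain = 9.0 + 0.42 * dT
    wet = []
    rain = 0.0
    for d in range(days):
        p = p_onset if d < delta else p_wet_max * sum(wet[-delta:]) / delta
        is_wet = rng.random() < p
        wet.append(1 if is_wet else 0)
        rain += p_wet_rain if is_wet else 0.0
    return rain / days  # seasonal mean rainfall, mm/day
```

A dry onset (p_onset = 0) locks the season into the dry phase under this persistence rule, which is exactly the kind of regime behaviour the bounded onset probability in the full model is designed to avoid.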

Tipping point interactions
Tipping points can interact with each other in multiple ways (60; 64). Some of these interactions are hardwired into the structure of our model. For example, the PCF increases GMST, which affects all seven remaining tipping points in our study, because all of them depend on temperature. However, the structure of our model can only capture a limited subset of all the possible interactions between tipping points. To increase the number of interactions, we use the expert elicitation study of Kriegler et al. (64), which attempted to quantify how the triggering of one tipping point can cause the hazard rates of other tipping points to change, with a focus on mechanisms other than temperature.

12 With a four-day time step, we set the memory period δ = 16 days, rather than 17 days as in (10).
We apply a hierarchical Bayesian analysis to obtain best estimates of the hazard rate changes provided by the experts in (64). The hazard rate changes, i.e. the interactions, are represented by a range for expert i from lower bound u_i to upper bound n_i. Each change/interaction is a multiplier on the base hazard rate, so a value of 1 means no change.
We posit a true, expert-specific hazard rate change, θ_i, and further assume that these true values are drawn from a normal distribution with unknown mean and variance: θ_i ∼ N(μ, σ²). This allows the expert opinions to be partially pooled to inform the hyperparameters μ and σ. We treat cases where experts were uncertain about the lower bound of the hazard rate change as having a lower bound of 0, and cases where they were uncertain about the upper bound as an upper bound of 10. Figure 8 presents the results. Table 4 provides a matrix of the interactions, with values in parentheses taken from (64). The absence of parentheses indicates the interaction is hardwired in the model structure. Zeros indicate an interaction that is included, but that has a statistical zero effect according to (64).
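The partial-pooling idea can be illustrated with a toy empirical-Bayes shrinkage. The within-expert variance rule (quarter-range squared) and the shrinkage weights are simplifications of the full hierarchical model, used here only to show the mechanics:

```python
def pooled_estimates(bounds):
    """Toy empirical-Bayes partial pooling: each expert's range [u_i, n_i]
    is summarised by its midpoint and an assumed within-expert variance
    ((n_i - u_i)/4 squared), and midpoints are shrunk towards the grand mean
    in proportion to the between- vs within-expert variance."""
    mids = [(u + n) / 2.0 for u, n in bounds]
    within = [((n - u) / 4.0) ** 2 for u, n in bounds]
    grand = sum(mids) / len(mids)
    between = sum((m - grand) ** 2 for m in mids) / max(1, len(mids) - 1)
    shrunk = []
    for m, w in zip(mids, within):
        weight = between / (between + w) if (between + w) > 0 else 0.0
        shrunk.append(grand + weight * (m - grand))
    return shrunk
```

When experts agree, the pooled estimates coincide with their midpoints; when they disagree, each estimate is pulled part of the way towards the grand mean, with less precise experts pulled further.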

Climate module 2.2.1 Emissions
The principal inputs to the climate model are global emissions of CO2 and CH4. There are two sources of these. The first is anthropogenic emissions from burning fossil fuels, from industrial processes, land use and land-use change, and waste disposal. Anthropogenic emissions are exogenous and sourced from the RCP database (74), 13 giving four emissions scenarios estimated in order to match prescribed paths for radiative forcing. 14 Estimating the SCC requires that a time horizon of several hundred years be considered, due to the long atmospheric lifetime of CO2. Therefore we use the RCPs extended to 2300 by (76). Other anthropogenic and natural sources of radiative forcing, both positive and negative, are aggregated into an exogenous residual radiative forcing series. 15 Projections of these sources are also taken from the extended RCP database. The second source of emissions is the carbon-cycle feedbacks described in the previous section, i.e. permafrost melting, dissociation of ocean methane hydrates, and Amazon rainforest dieback.

13 http://www.iiasa.ac.at/web-apps/tnt/RcpDb
14 RCP2.6 peaks at ~3 Wm−2 before 2100 and then declines; RCP4.5 reaches ~4.5 Wm−2 at stabilisation after 2100; RCP6 reaches ~6 Wm−2 at stabilisation after 2100; RCP8.5/baseline exceeds 8.5 Wm−2 in 2100.

CO 2 and CH 4 cycles
CO 2 emissions are fed into the FAIR model of the carbon cycle (77). FAIR builds on the model of (78), which was designed to emulate a diverse set of carbon-cycle models of different complexity for an inter-comparison project. FAIR adds to this a reduced-form representation of positive carbon-cycle feedbacks, whereby the rate of CO 2 uptake by ocean and terrestrial carbon sinks is decreasing in cumulative CO 2 uptake by those sinks, and in temperature.
The most important of these feedbacks is saturation of the ocean carbon sink.
In the model, the atmospheric stock of carbon is partitioned into four boxes, each of which decays at a different rate:

S_i(t) = S_i(t − 1) e^{−δ_i/α(t)} + a_i CO2(t),

where S_i(t) is the stock of carbon in box i and CO2(t) = CO2_EX(t) + CO2_PF(t) + CO2_AMAZ(t). CO2_EX(t) stands for exogenous, anthropogenic emissions from the RCPs. The coefficients a_i determine the fraction of emissions entering each box; Σ_i a_i = 1. To emulate the behaviour of the representative carbon-cycle model in (78), i.e. the multi-model average, it so happens that the allocation between the four boxes is of the order of 25% each (more precisely, 22-28%). The coefficients δ_i are the decay rates, which range from approximately zero, loosely corresponding to the time taken for CO2 to be sequestered by geological sedimentation, to around 23%, corresponding to rapid removal of atmospheric CO2 by the biosphere and upper oceans (77).
The coefficients α(t) represent the positive carbon-cycle feedbacks, slowing down the rate of removal of atmospheric CO2 from each box. In turn, α(t) is a function of the integrated CO2 impulse response function in FAIR over 100 years (iIRF_100). In other words, iIRF_100 is the average airborne fraction of the CO2 impulse over a period of time (in this case 100 years), multiplied by that period of time. In FAIR, iIRF_100 is modelled in reduced form as a linear function of temperature and cumulative emissions absorbed by carbon sinks:

iIRF_100(t) = r_pre + r_T ∆T_AT(t) + r_C C_acc(t),

where C_acc(t) is cumulative carbon absorbed by the sinks, r_pre = 34.4 years is the estimated pre-industrial value of iIRF_100, 16 r_T = 4.165 years/°C, r_C = 0.019 years/GtC, and S̄ is the pre-industrial concentration of atmospheric CO2, 278ppm. iIRF_100 can take a maximum value of 96.6, otherwise the model becomes unstable (77). 17 The relationship between α(t) and iIRF_100(t) has no analytical solution. We estimate α(t) by fitting an exponential function, 18 which gives

α(t) = χ_1 e^{χ_2 iIRF_100(t)},

where χ_1 = 0.0107 and χ_2 = 0.0866. CH4 has a much shorter atmospheric residence time than CO2 and its decay can be adequately represented by a simple one-box model.

16 We use a constant of 34.4 instead of 32.4 as per the original FAIR model (77) in order to obtain a better fit under current and future conditions. A value of 32.4 better fits decay under pre-industrial conditions (77, table 2).
17 If iIRF_100 > 100, the atmospheric concentration of CO2 grows without bound in response to an emissions impulse.
18 With thanks to Frank Venmans.
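A sketch of the box update and feedback factor follows. The partition fractions and decay rates are placeholders of the stated orders of magnitude (roughly 25% per box; decay rates from near zero to about 23%/yr), and the discrete-time form is an assumed discretisation, not FAIR's exact implementation:

```python
import math

A = [0.25, 0.25, 0.25, 0.25]          # emission shares per box (placeholder)
DELTA = [1e-6, 0.0025, 0.027, 0.23]   # baseline decay rates /yr (placeholder)
R_PRE, R_T, R_C = 34.4, 4.165, 0.019  # iIRF100 reduced-form coefficients
CHI1, CHI2 = 0.0107, 0.0866           # exponential fit for alpha(t)
IIRF_MAX = 96.6                       # stability cap on iIRF100

def iirf100(dT, c_acc):
    """Linear reduced form for the 100-year integrated impulse response,
    increasing in warming and cumulative carbon absorbed by sinks."""
    return min(IIRF_MAX, R_PRE + R_T * dT + R_C * c_acc)

def alpha(dT, c_acc):
    """Exponential fit mapping iIRF100 to the feedback factor alpha(t)."""
    return CHI1 * math.exp(CHI2 * iirf100(dT, c_acc))

def step_boxes(S, emissions, dT, c_acc):
    """One annual update: each box receives its share a_i of emissions and
    its existing stock decays at rate delta_i, slowed as alpha grows."""
    a = alpha(dT, c_acc)
    return [s * math.exp(-d / a) + ai * emissions
            for s, d, ai in zip(S, DELTA, A)]
```

Because the shares sum to one, an emissions pulse into empty boxes is conserved on impact, and the feedback factor rises with warming and cumulative sink uptake, slowing subsequent removal.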

Radiative forcing and temperature
We specify Arrhenius' logarithmic relationship between radiative forcing and atmospheric CO2:

F_CO2(t) = (F_2×CO2 / ln 2) × ln[(Σ_i S_i(t) + S̄) / S̄],

where F_2×CO2 is the radiative forcing resulting from a doubling of atmospheric CO2.
For radiative forcing from atmospheric CH4, we use IPCC's simplified expression (79). There is significant overlap between some of the infrared absorption bands of CH4 and nitrous oxide (80), which means that radiative forcing from atmospheric CH4 is not independent of the atmospheric concentration of nitrous oxide. That is why atmospheric nitrous oxide appears in the expression in Eq. (38). Overall radiative forcing, F(t), is the sum of the contributions from atmospheric CO2 and CH4, as well as the residual forcing from other greenhouse gases and drivers. From forcing, the increase in GMST is governed by a model comprising two heat boxes, one for the atmosphere, land surface and upper oceans, ∆T_AT, and one for the lower/deep oceans, ∆T_LO. This is the same model structure used by Nordhaus in DICE and it has separately been shown to emulate well the temperature response to emissions of a wide range of General Circulation Models (81), albeit with a different parameterisation than Nordhaus.
The equations of motion for ∆T_AT and ∆T_LO are

∆T_AT(t) = ∆T_AT(t − 1) + (1/C_UP) [F(t) − (F_2×CO2/ς) ∆T_AT(t − 1) − γ (∆T_AT(t − 1) − ∆T_LO(t − 1))],
∆T_LO(t) = ∆T_LO(t − 1) + (γ/C_LO) (∆T_AT(t − 1) − ∆T_LO(t − 1)),

where C_UP and C_LO are the effective heat capacities of the upper and lower oceans per unit area, respectively, ς is the equilibrium climate sensitivity or ECS and γ is a coefficient of heat exchange between the upper and lower oceans. A comparison of the resulting temperature response with that of (81) shows that they are in close agreement.
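The two-box update can be sketched as follows, with illustrative parameter values (not the paper's calibration). A basic sanity check is that under constant forcing equal to F_2×CO2, both boxes converge to the ECS:

```python
F2XCO2 = 3.71   # Wm-2, forcing from a doubling of CO2 (standard value)
ECS = 3.1       # deg C, illustrative equilibrium climate sensitivity
C_UP = 7.3      # upper-box effective heat capacity (illustrative)
C_LO = 106.0    # lower-box effective heat capacity (illustrative)
GAMMA = 0.73    # upper-lower heat exchange coefficient (illustrative)

def step_temperature(T_at, T_lo, forcing):
    """One annual step of the two-box heat model: the upper box responds to
    forcing net of outgoing radiation (F2XCO2/ECS per degree of warming) and
    of heat exchanged with the deep ocean; the lower box warms towards the
    upper box."""
    dT_at = (forcing - (F2XCO2 / ECS) * T_at - GAMMA * (T_at - T_lo)) / C_UP
    dT_lo = GAMMA * (T_at - T_lo) / C_LO
    return T_at + dT_at, T_lo + dT_lo
```

The large lower-box heat capacity produces the familiar multi-century adjustment of deep-ocean temperature, while the upper box responds within decades.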

Sea level rise
Sea level rise comprises a contribution from thermal expansion and melt from glaciers and small ice caps, SLR_THERM(t), as well as contributions from disintegration of the GIS and WAIS:

SLR(t) = SLR_THERM(t) + SLR_GIS(t) + SLR_WAIS(t).

Sea level rise is defined relative to the year 2000 and SLR(0) = 0.04m (82). To model the contribution from thermal expansion and melt from glaciers and small ice caps, we follow (61) in specifying SLR as a linear function of warming, where r_TE = 0.00078 and r_GSIC = 0.00081 parameterise the rates of SLR from thermal expansion and melt from glaciers and small ice caps respectively. Sea level rise from thermal expansion is parameterised such that 1°C warming results in a very long-term equilibrium increase of 0.5m (i.e. over the course of approximately 1000 years).

National temperature
We want to implement climate damages at the national level, both to make best use of available empirical damage estimates, which are now at the national level, and to accurately model the impacts of AMOC slowdown in particular, which vary from country to country.
We use the damage estimates of (83). In order to use these, we need to convert the increase in GMST relative to pre-industrial into the level of national mean surface temperature. We do this by means of statistical downscaling, before subsequently adding the effect of AMOC slowdown on the level of national mean surface temperature. For country i, statistical downscaling involves estimating the ratio of national mean surface temperature to global mean surface temperature, λ(i, t), as a function of country-level coefficients α(i) and β(i), where T_AT(0) is the level of global mean surface temperature at t = 0, i.e. 2010, and T_AT(pre) is an estimate of the pre-industrial GMST. The estimating equation follows the logic that each country's mean temperature converges to a long-term equilibrium difference with respect to the global mean. The coefficients α(i) and β(i) are estimated by OLS, pooling data for RCP4.5 and RCP8.5.
National mean surface temperature is estimated by applying the ratio λ(i, t) to the level of GMST at time t and adding the change in national mean surface temperature due to AMOC slowdown:

T(i, t) = λ(i, t) T_AT(t) + ∆T_AT_AMOC(i, t).

Damages and national income per capita
Income growth depends on exogenous drivers, as well as damages from changing temperatures and from sea level rise (and, in India only, from the summer monsoon). Post-damage income per capita in country i, y(i, t), grows at an exogenous, country- and time-specific growth rate g_EX(i, t), taken from the Shared Socio-Economic Pathway (SSP) database (84), 19 net of damages. The SSPs were designed as a flexible accompaniment to the RCP emissions scenarios. 20 The SSP scenarios are only defined until 2100. To extend these scenarios until 2300, we follow a procedure described in Section 2.4.1.
D_TEMP(i, t) are temperature damages and D_SLR(i, t) are SLR damages.
The level of income per capita in the previous year, on which damages in the current year work, is a weighted combination of counterfactual and actual income, where y_EX(i, t − 1) is counterfactual income per capita, also taken from the SSP database, y(i, t − 1) is the actual post-damage income per capita experienced, and ϕ parameterises the weight given to each. This specification enables us to explore two different interpretations of the empirical evidence on temperature damages. The first interpretation is that temperatures impact the growth rate of income, by directly impacting the accumulation of factors of production and/or by impacting productivity growth (85). Such 'growth' damages correspond with ϕ = 0. The second interpretation is that temperatures impact the level of income; such 'levels' damages correspond with ϕ = 1. Our main specification is an intermediate value of ϕ = 0.5.
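To illustrate how ϕ distinguishes growth from levels damages, here is a stylised simulation. The recursion below, in which damages act multiplicatively on a ϕ-weighted base of counterfactual and actual income, is our reading of the specification, not the paper's exact equation.

```python
def income_path(y0, g_ex, y_ex, damage, phi, periods):
    """Simulate post-damage income per capita under a stylised reading of the
    phi-weighted specification.  Damages each period act on a base that mixes
    counterfactual income (weight phi) and actual post-damage income
    (weight 1 - phi): phi = 0 gives 'growth' damages (losses compound),
    phi = 1 gives 'levels' damages (losses do not compound).
    """
    y = y0
    path = []
    for t in range(1, periods + 1):
        base = phi * y_ex[t - 1] + (1 - phi) * y
        y = base * (1 + g_ex) * (1 - damage)
        path.append(y)
    return path
```

With ϕ = 0 a constant damage fraction compounds over time, whereas with ϕ = 1 income only ever falls below the counterfactual by a single period's loss, so the two paths diverge steadily.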
Temperature damages themselves are given by the expression below, where the coefficients β_1i and β_2i are calibrated on the empirical results of (83). The procedure is described in Section 2.4.2.
SLR damages are given by where θ(i) parameterises the cost to country i per unit SLR. Like (3), we obtain SLR damages from Diaz's CIAM model (5), but we preserve the country resolution. We run CIAM to obtain estimates of national coastal damage/adaptation costs as a function of SLR in two scenarios, (i) no adaptation and (ii) optimal adaptation. We treat each country's adaptation decisions as uncertain and obtain a symmetrical triangular distribution for each θ(i) with a minimum corresponding to costs in (i) and a maximum corresponding to costs in (ii). We use costs/SLR in 2050 for the calibration, a simple approach facilitated by the fact that the relationship between the two is approximately linear over the 21st century (5).
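Under these assumptions, sampling θ(i) and applying the linear damage rule can be sketched with the standard library; sample_theta and slr_damages are hypothetical helper names, and sorting the two CIAM estimates so that the smaller is the lower bound is our simplification.

```python
import random

def sample_theta(cost_no_adapt, cost_optimal, rng=random):
    """Draw theta(i) from a symmetric triangular distribution whose support
    spans the CIAM cost estimates under no adaptation and optimal adaptation,
    with the mode at the midpoint."""
    lo, hi = sorted([cost_no_adapt, cost_optimal])
    return rng.triangular(lo, hi, (lo + hi) / 2.0)

def slr_damages(theta, slr_m):
    """Linear SLR damages: cost per metre of SLR times metres of SLR."""
    return theta * slr_m
```

The symmetry of the triangular distribution means the expected θ(i) is simply the midpoint of the two CIAM scenarios' costs.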
In India, there is an additional damage multiplier D_ISM(IND, t), so that national income per capita is given by the expression below. Following (10), the ISM damage multiplier is structured such that only extremely wet monsoon seasons and extremely dry monsoon seasons affect income in India, with the measure of precipitation being average rainfall for the monsoon season, P(t), from Eq. (23). The drought threshold is P_drought = 2.8667 mm/day, while the equivalent flood threshold is P_flood = 7.6667 mm/day. Drought-related damages are D_drought = 3.5% of GDP, while flood-related damages are D_flood = 0.85%. All these parameter values are taken from (10).
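A minimal sketch of the ISM damage rule, using the thresholds and damage fractions quoted above; treating the thresholds as strict inequalities is our assumption.

```python
# Parameter values from (10), as quoted above.
P_DROUGHT = 2.8667   # mm/day
P_FLOOD = 7.6667     # mm/day
D_DROUGHT = 0.035    # 3.5% of GDP
D_FLOOD = 0.0085     # 0.85% of GDP

def ism_damage(p_mm_per_day):
    """Damage fraction of Indian GDP as a step function of mean monsoon
    rainfall: only extremely dry or extremely wet seasons cause losses."""
    if p_mm_per_day < P_DROUGHT:
        return D_DROUGHT
    if p_mm_per_day > P_FLOOD:
        return D_FLOOD
    return 0.0
```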

Utility and welfare
Post-damage national income per capita is first converted into consumption per capita using a country-specific but time-invariant savings rate, where the country savings rates s(i) are calibrated on observed national savings rates averaged over the period 2005-2015, using World Bank data. Savings data are missing for many countries, in which case we impute the global average, also obtained from the World Bank.
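The conversion is a one-liner; representing missing savings data with None and imputing the global average rate is our stylised rendering of the missing-data rule.

```python
def consumption_per_capita(income, savings_rate, global_avg_rate):
    """Convert post-damage income per capita into consumption per capita
    using a time-invariant country savings rate; where the country rate is
    missing (None), impute the global average rate."""
    s = savings_rate if savings_rate is not None else global_avg_rate
    return (1 - s) * income
```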
This specification assumes savings are exogenous and do not respond to changing income prospects. Fully endogenous savings are computationally infeasible in a model with this much complexity and detail. The limitations of assuming constant/exogenous savings have been discussed in the literature, e.g. (86). Small to moderate climate damages do not appear to shift savings rates measurably. Large damages can do so, however. In Section 3, we report a sensitivity analysis, in which we shift all countries' savings rates up and down by a fixed amount.
Consumption is converted into utility using a standard, constant-elasticity-of-substitution representation, where η is the elasticity of marginal utility of consumption.
To compute overall welfare, we specify a discounted classical/total utilitarian social welfare functional. We begin by calculating welfare for each country i: where ρ is the utility discount rate, a.k.a. the pure rate of time preference. Discounted, population-adjusted current period utility is then summed over the whole modelling horizon to obtain total welfare. Population data are exogenous and taken from the SSP scenarios.
Global welfare follows naturally as the sum of welfare across all countries i:
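A minimal sketch of the utility and welfare calculations above, assuming the standard isoelastic form u(c) = c^(1−η)/(1−η) (log utility in the limiting case η = 1) and discrete-time discounting at rate ρ; the exact indexing and normalisation of the model's welfare functional may differ.

```python
import math

def utility(c, eta):
    """Isoelastic utility with elasticity of marginal utility eta
    (log utility in the limiting case eta = 1)."""
    if eta == 1.0:
        return math.log(c)
    return c ** (1.0 - eta) / (1.0 - eta)

def welfare(consumption, population, eta, rho):
    """Discounted utilitarian welfare: population-weighted utility summed
    over countries and periods, discounted at pure rate of time preference
    rho.  consumption[t][i] and population[t][i] give per-capita consumption
    and population of country i in period t (t = 0 is the base year).
    """
    total = 0.0
    for t, (c_row, n_row) in enumerate(zip(consumption, population)):
        disc = (1.0 + rho) ** (-t)
        total += disc * sum(n * utility(c, eta) for c, n in zip(c_row, n_row))
    return total
```

Global welfare is the sum over countries, which here falls out of summing each period's population-weighted utilities across the inner lists.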

Computing the social cost of carbon
The social cost of carbon along a particular scenario of emissions, income and population is the difference in welfare caused by a marginal emission of CO2, normalised by the marginal welfare value of a unit of consumption in the base year (Eq. 57). To calculate the numerator, we run the model twice with identical assumptions, the second time with an additional pulse of emissions. Let θ_m represent a vector of parameter values of the model as described in the preceding sections; in most cases these are random draws from a distribution, including individual tipping event realisations. We then calculate Eq. (58), where ∆E is the emissions pulse. We focus on an emissions pulse in 2020.
The denominator of (57), ∂W/∂c̄(t), depends on the consumption level of the normalising agent. We define this agent as the global average individual, i.e. we use global mean consumption per capita, c̄(t) (Eq. 59). Note that this is also uncertain and depends on the vector of random parameters. Differentiating the utility function then gives Eq. (60). We focus on a base year of 2020.
We then calculate the SCC as the negative of the ratio of equations (58) and (60).
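The resulting calculation can be sketched as a finite difference, assuming the denominator reduces to marginal utility u′(c̄) = c̄^(−η) of global mean consumption per capita in the base year; any further normalisation constants are omitted, and the function name is ours.

```python
def scc(w_base, w_pulse, pulse_tco2, cbar_base, eta):
    """Social cost of carbon: the welfare loss from a marginal emissions
    pulse, normalised by the marginal welfare value of a unit of consumption
    for the global average individual in the base year, u'(cbar) = cbar**(-eta)."""
    dW_dE = (w_pulse - w_base) / pulse_tco2   # numerator, Eq. (58)
    dW_dc = cbar_base ** (-eta)               # denominator, Eq. (60)
    return -dW_dE / dW_dc
```

Because the pulse lowers welfare, w_pulse < w_base and the leading minus sign makes the SCC positive.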

Non-market damages
The above damages from temperature, SLR and the ISM can be regarded as 'market' damages. They do not include estimates of the welfare cost of climate change outside markets, for example damages to ecosystems that can be priced at people's willingness to pay (WTP) to preserve those ecosystems' existence. 'Non-market' damages are more uncertain than their market counterparts, but in many IAMs they occupy a substantial share of total welfare damages from climate change (e.g. 88), so we add an estimate of them as a sensitivity check.
Figure 9: Willingness to pay to avoid 1.5°C, 2.5°C and 4°C of warming, as a function of income, as described by the MERGE model.

We calculate this WTP measure at a national level. The non-market damage multiplier, or economic loss function, is a hockey-stick function embodying the assumption that non-market damages can increase rapidly as temperatures become more extreme. ∆T_cat is a catastrophic warming parameter, set to 17.68°C, which people are assumed to be willing to avoid at any cost.
h(i, t) is the hockey-stick parameter, which depends on country income per capita: as income rises above $25k per capita, the WTP to avoid 2.5°C of warming increases from 2%.
The non-market damage multiplier is applied to country-level utility: for utility function u(·) as specified above.
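Because the loss-function equations are not reproduced here, the sketch below assumes a MERGE-style form ELF(∆T) = (1 − (∆T/∆T_cat)²)^h, which equals 1 at no warming and falls to 0 at the catastrophic level ∆T_cat; the exact functional form used in the model may differ.

```python
def economic_loss_factor(dT, h, dT_cat=17.68):
    """Hypothetical sketch of a MERGE-style hockey-stick loss function:
    1 (no non-market loss) at dT = 0, falling to 0 (people would give up
    all consumption to avoid it) as dT approaches the catastrophic level."""
    x = min(abs(dT) / dT_cat, 1.0)
    return (1.0 - x ** 2) ** h
```

Multiplying country-level utility by this factor scales welfare down smoothly for moderate warming and steeply as ∆T approaches ∆T_cat; a larger h (richer country, higher WTP) produces larger non-market losses at any given ∆T.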

Supporting analysis

2.4.1 Extending the SSP scenarios beyond 2100
To estimate post-2100 income and population along the SSP scenarios, we fit a model to the available pre-2100 SSP scenario data and use the fitted model to extrapolate. The same model is applied to both income and population and is defined in terms of growth rates. The model postulates that changes in pre-2100 income and population growth rates are explained by a rate of convergence and a rate of decay.
The model is as follows, where δ is the rate of convergence and β is the decay rate. Below, we write the population-weighted global average growth rate as Growth(·, t − 1) · w, where w is the vector of global population shares for each country.
SSP data are not available in every year, so fitting Eq. (63) requires a model with dynamics. We use a two-step approach, fitting the model in Stan, a platform for Bayesian statistical computation. The first step uses the available data directly, fitting the expression below, where s indexes time steps, ∆t is the number of years between time steps, and country i has error standard deviation σ_i. We apply a prior that both β and δ lie between 0 and 0.5.
Next, we fit the full model, using the results of the simplified model to improve convergence of the Bayesian sampler. In this case, for a given Markov chain Monte Carlo draw of β and δ, we calculate the entire time series, starting from Growth(i, 2015) as reported in the SSP dataset.
The probability evaluation covers both the performance of the fit and the priors, where µ is the mean estimate of the corresponding parameter and σ is the standard deviation across its uncertainty. The prior for σ_i is log-normal, centred on the mean of the estimates of log σ_i. The estimates for each SSP are shown in Table 5.
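Once β and δ are fitted, the post-2100 extrapolation can be sketched as a simple recursion; the exact placement of the convergence and decay terms is our reading of the description above, with g_world standing for the population-weighted global growth rate Growth(·, t − 1) · w.

```python
def extend_growth(g0, g_world, beta, delta, periods):
    """Extrapolate a country's growth rate with a decay term (beta) and a
    term pulling it towards the world growth rate (delta).  This recursion
    is an illustrative reading of 'convergence plus decay', not the paper's
    exact estimating equation."""
    g = g0
    path = [g]
    for _ in range(periods):
        g = g + delta * (g_world - g) - beta * g
        path.append(g)
    return path
```

With δ = 0 the growth rate decays geometrically towards zero at rate β; with β = 0 it converges geometrically towards the world rate at rate δ.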

2.4.2 Calibration of country-specific damage functions
We develop damage functions based on the econometric estimates of Burke et al. (83) (pooled global estimates without lags). Since the independent variable in the estimation process of (83) is annual average temperature, a few steps are necessary to develop functions indexed to climatic average temperature, as produced by our model. These steps follow the basic procedure described in (90):

1. Using the replication materials provided for (83), estimate their standard model coefficients, along with the associated variance-covariance matrix.
2. Generate annual average temperatures for each country, as projected by a range of bias-corrected, statistically-downscaled GCMs. Temperature averages are population-weighted, according to gridded population data from (91). We use the set of downscaled models from (92), supplemented by surrogate pattern-scaled models representing the tails of the GMST probability distribution, as described in (90).

3. Estimate growth impacts for each country, year, GCM and RCP. We take 3,000 random Monte Carlo draws from the uncertainty in the coefficients, using a multivariate normal distribution with the variance-covariance matrix estimated above. The growth impacts are reported as changes in the log of country-specific GDP per capita, and we do not accumulate growth impacts over time.

4. Compute the difference between the estimated, un-normalised growth rate in each year and the average of the estimated growth rates from 1981-2000. We call this ∆ log y(i, r, t), where r stands for the RCP.

5. Calculate the average changes in growth rates across all GCMs and Monte Carlo runs, for each RCP, year and country, as well as the standard deviation across growth rates. We weight GCMs as done in (90).
6. Calculate the climatic temperature T_AT(i, r, t) as the weighted average across all GCMs of each GCM's area-weighted average temperature. The GCMs are weighted as above.
7. For each country, pool the results for all years and RCPs. Use these data to estimate the regression model below, where T_AT(i, r, 1990) is the average climatic temperature from 1981 to 2000. The regression observations are weighted by inverse variance.

The projections of total SLR from all sources -including thermal expansion, melting of glaciers and small ice sheets, and melting of the GIS and WAIS -are similar to the corresponding IPCC projections. Our model projects 0.009m less SLR in 2020, but 0.06m more in 2100. The difference is well within the 67% confidence interval (93, Figure 13.11). Both the GIS model of (3) and the WAIS model of (61) project lower SLR contributions than their median IPCC counterparts. The difference is larger for the GIS. The extra SLR projected towards the end of the century in our model, compared with the median IPCC model, is therefore due to thermal expansion and melting of glaciers and small ice sheets.

Table 6 and Figure 13 report the change in the SCC due to the permafrost carbon feedback across different scenarios reported in the literature. The central estimates of the three published studies are in relatively close agreement about the percentage increase in the expected SCC, ranging from 6.3% in (57) to 12.4% in (30). In the stochastic implementation of Kessler's (30) model, the distribution of percentage increases in the SCC spreads, but the change in the expected SCC is not much higher than in Kessler's deterministic implementation, at 13.5%.

Figure 13: The percentage change in the SCC due to the permafrost carbon feedback. Boxes show median and interquartile range; whiskers show the 95% confidence interval; crosses mark the average change (0.1% trimmed); triangles mark the 0.5th percentile; squares mark the 99.5th percentile. The y-axis is truncated. Emissions and GDP/population growth are from RCP4.5-SSP2. The Monte Carlo sample size is 10,000.
Table 7 and Figure 14 report the change in the SCC due to dissociation of ocean methane hydrates across different scenarios reported in the literature. The main scenario explored in (53) and the three scenarios explored in (59) involve very different cumulative CH4 releases, which is reflected in a wide range of values of the expected SCC, from 4.1% higher than without the tipping point to 49.2% higher. Conversely, the increase in the SCC in the (53) scenario (50 GtCH4 cumulative release) appears very robust to the specification of the hazard rate, as well as to the duration of the 50 GtCH4 release (see Section 2.1.2). In particular, although the hazard rate is much higher under the uniform distribution, the percentage increase in the SCC is only fractionally higher. This reflects the fact that the 50 GtCH4 release is relatively likely before 2200 under all specifications, and the difference in the expected timing of the release due to differences in the hazard rate has a relatively small effect on the SCC, as do differences in the duration of the release between 10 and 30 years (the default specification is 20 years). This is consistent with the sensitivity analysis performed by (53) using the PAGE2002 IAM.

Figure 14: The percentage change in the SCC due to dissociation of ocean methane hydrates. Boxes show median and interquartile range; whiskers show the 95% confidence interval; crosses mark the average change (0.1% trimmed); triangles mark the 0.5th percentile; squares mark the 99.5th percentile. The y-axis is truncated. Emissions and GDP/population growth are from RCP4.5-SSP2. The hazard rate b_OMH is beta distributed unless otherwise specified. The Monte Carlo sample size is 10,000.

Table 8 and Figure 15 report the change in the SCC due to slowdown of the AMOC across the four hosing scenarios explored in (6). The SCC is lower across all quantiles in all four scenarios.
(6) similarly found an increase in welfare in all four scenarios, using both damage functions fitted on the literature and the FUND IAM. The decrease in the expected SCC is much larger in the 'Hadley 67%' scenario than in the other scenarios, at -5.4%. This reflects the much larger, two-thirds slowdown of the AMOC in that scenario. Notice that the decrease in the expected SCC is greater in the 'HADCM 7%' scenario than in the 'BCM 24%' scenario, despite a smaller slowdown of the circulation. In general, the ordering of the temperature effects of the three milder hosing scenarios varies across countries, so while the slowdown is smallest overall under 'HADCM 7%', the cooling effect is larger in some countries (see 6, Fig. 1).

Table 9 and Figure 16.

Table 10 and Figure 17 report the change in the SCC due to tipping points for different values of the parameter ϕ, which determines whether climate damages work on the level or the growth rate of income. Our main specification is an intermediate value of ϕ = 0.5. When ϕ = 1, representing pure levels damages, the expected SCC is lower, at $26.94/tCO2, while the percentage increase in the expected SCC is slightly higher, at 26.0%. As ϕ is increased across its range, the expected SCC decreases, while the percentage increase in the expected SCC due to tipping points changes comparatively little. However, with a pure growth damages specification (ϕ = 0), the expected SCC is nearly a factor of 100 higher and the percentage increase in the expected SCC is also much higher, at 87.0%. Thus both the expected SCC and the effect of tipping points on it are highly sensitive to ϕ as it approaches its lower limit of zero.

Levels versus growth damages
The high value of the expected SCC when ϕ = 0 is in the ballpark of the values reported by (95), who also used a pure growth damages specification.

As Table 12 shows, the expected SCC is higher both when the elasticity of marginal utility is lower than in our main specification and when it is higher. The effect of tipping points on the expected SCC is similar to the main specification when the elasticity parameter is set to a very low value of 0.5; however, when it is set to 2, the effect of tipping points is much larger, at 55.8%. Further inspection of Figure 19 shows that this large percentage difference in the expected SCC when the elasticity of marginal utility is 2 is driven by a few runs in the tail of the distribution, since the median percentage difference in the SCC due to tipping points is just 22.0%. The sensitivity of welfare estimates to the elasticity of marginal utility in cases where consumption per capita can be driven to near-subsistence levels is well known, having been identified by Weitzman as part of his 'Dismal Theorem' (97; 98; 99).

Table 13 and Figure 20 report the change in the SCC due to tipping points when a globally aggregated, non-market damage function is added to the model, and compare it with the main specification, which excludes non-market damages. The expected SCC increases.

The distribution of percentage changes in the SCC due to all tipping points combined has a large positive skew. Most draws return a percentage change of between 0% and +50%, but a long tail of draws returns increases of 200% or more. The 95% confidence interval is -0.3% to +186.0%. Eleven per cent of draws result in the SCC at least doubling. The expected change in the SCC is 42.8%. In the underlying fractional factorial design, ϕ is uniformly distributed between 0 and 1, the pure rate of time preference is triangular distributed with a minimum of 0.1%, a mode of 1% and a maximum of 2%, and the elasticity of marginal utility is also triangular distributed with a minimum of 0.5, a mode of 1.5 and a maximum of 2.

Figure 21: Frequency distribution of percentage changes in the SCC due to all tipping points combined, based on a pooled sample of 32,000 Monte Carlo draws from a fractional factorial design.

Table 14 and Figure 22 report the change in the SCC due to tipping points for different country savings rates. Recall that savings rates are exogenous and time-invariant, and in our main specification they are calibrated on mean country-specific savings rates between 2005 and 2015, according to World Bank data. As a simple sensitivity analysis, we shift each country's (time-invariant) savings rate up and down by two percentage points, which represents roughly two standard deviations of the global average savings rate between 2005 and 2015. The expected SCC is lower in both cases, and the effect of tipping points on the expected SCC is close to our main specification.

Spatial impacts of tipping points
This section maps how each individual tipping point affects the country-level SCC. For each figure, we normalise welfare changes to global mean consumption per capita to enable comparison with other SCC estimates in the literature. The specification further comprises RCP4.5-SSP2 emissions and GDP/population growth, and relies on a Monte Carlo sample size of 10,000. As before, we trim 0.1% of runs to ensure consistency.
There is a broad North-South divide in the effect of the PCF (Figure 23), reflecting whether the country experiencing an increase in temperature due to this tipping point is below or above the inflection point in its quadratic temperature-damages relationship (83).

Effect of tipping points on inequality
As discussed in the main text, tipping points affect inequality only marginally. Figure 31 plots the Lorenz curve with and without all tipping points, normalised to the global SCC for comparability. As can be seen, there is little change to how unequally climate impacts are distributed across the world population.

Analysis of Monte Carlo sample size and trimming the SCC distribution
Most of our results are obtained from a sample of size 10,000. Table 15 shows variation in some of our main output variables -the SCC, consumption per capita (mean and standard deviation), temperature and SLR -over 10 × 10,000 samples. By combining samples, we also show results for samples of size 20,000, 50,000 and 100,000.
The results exhibit a high degree of consistency across samples. Mean consumption per capita in 2100 has a standard deviation of 42 cents without tipping points and 83 cents with tipping points (i.e. this is the standard deviation of the sample means, rather than the within-sample standard deviation, which is also reported). The mean effect of tipping points on 2100 temperature varies by much less than 0.001 • C across samples and the mean effect of tipping points on 2100 SLR varies by much less than 1cm. The standard deviation of the expected SCC (0.1% trimmed) without tipping points is two cents across samples; with tipping points it is 27 cents. The difference in the expected SCC due to tipping points has a standard deviation of 0.8 percentage points across samples.
Lastly, Table 16 reports the difference in the expected SCC for different degrees of trimming/truncation of the SCC distribution. We trim the distribution of SCCs as a pragmatic way to discard implausible values, which can result from implausible combinations of input variables. Ideally one would truncate the input distributions; however, there are so many that it is difficult to establish a priori which combinations are implausible. As Table 16 shows, the change in the expected SCC from the inclusion of tipping points rises the less the distribution is trimmed, reflecting the skew in the SCC distribution, although the effect is not large. In our judgement, all but a handful of runs are legitimate, so we seek to trim the distribution as little as possible. From trial and error, we established that trimming 0.1% of the distribution (i.e. 10 runs out of 10,000) excluded large negative values of the SCC and some extremely large positive values.
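For reference, a trimmed mean of the SCC distribution can be computed as follows; splitting the trimmed runs evenly between the two tails is our assumption (the text reports trimming 0.1% of runs in total).

```python
def trimmed(values, frac=0.001):
    """Drop a fraction frac of the most extreme draws, split evenly
    between the lower and upper tails, and return the sorted remainder."""
    n_drop = int(round(len(values) * frac / 2))
    s = sorted(values)
    return s[n_drop:len(s) - n_drop]

def trimmed_mean(values, frac=0.001):
    """Mean of the distribution after trimming frac of the draws."""
    t = trimmed(values, frac)
    return sum(t) / len(t)
```

With 10,000 draws and frac = 0.001, this drops 5 runs from each tail, removing exactly the 10 most extreme values.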