Research Article

Global risk of big earthquakes has not recently increased

Peter M. Shearer and Philip B. Stark
  a. Scripps Institution of Oceanography, University of California, San Diego, La Jolla, CA 92093-0225
  b. Department of Statistics, University of California, Berkeley, CA 94720-3860


PNAS January 17, 2012 109 (3) 717-721; https://doi.org/10.1073/pnas.1118525109
Contributed by Peter M. Shearer, November 12, 2011 (sent for review September 27, 2011)


Abstract

The recent elevated rate of large earthquakes has fueled concern that the underlying global rate of earthquake activity has increased, which would have important implications for assessments of seismic hazard and our understanding of how faults interact. We examine the timing of large (magnitude M≥7) earthquakes from 1900 to the present, after removing local clustering related to aftershocks. The global rate of M≥8 earthquakes has been at a record high roughly since 2004, but rates have been almost as high before, and the rate of smaller earthquakes is close to its historical average. Some features of the global catalog are improbable in retrospect, but so are some features of most random sequences—if the features are selected after looking at the data. For a variety of magnitude cutoffs and three statistical tests, the global catalog, with local clusters removed, is not distinguishable from a homogeneous Poisson process. Moreover, no plausible physical mechanism predicts real changes in the underlying global rate of large events. Together these facts suggest that the global risk of large earthquakes is no higher today than it has been in the past.

  • earthquake statistics
  • seismology

The above-average rate of earthquakes of magnitude 8 and larger in recent years (e.g., ref. 1) has prompted speculation that the underlying rate of earthquake activity has changed (2–5), that is, that the observed apparent rate fluctuation is larger than would be expected for a homogeneous random process. Similarly, the recent 2011 Tohoku, Japan, M 9.0 earthquake, together with the 2004 M 9.0 Sumatra-Andaman earthquake and the 2010 M 8.8 Maule, Chile, earthquake, has fueled concern that these giant quakes may not have been independent events (see the discussion in ref. 6). Temporal earthquake clustering, including aftershock sequences, is well known at local and regional scales. However, whether earthquake catalogs, with aftershocks removed, follow a temporal Poisson process—the canonical “unpredictable” temporal process—is a long-standing area of research in seismology (7–11).

True earthquake rate changes at global scales would have important implications for assessments of seismic danger and our understanding of how faults interact. Here we ask whether the recent elevation in large earthquake activity is statistically significant, and the larger question of whether the locally declustered global catalog is Poissonian. Using cataloged events from 1900 to 2011, we address this question in three ways: (i) plotting earthquake activity versus time to identify apparent anomalies in present and past rates of large earthquakes; (ii) performing Monte Carlo tests to estimate the probability of specific observed anomalies if seismicity were Poissonian with the observed average occurrence rate; and (iii) testing whether the locally declustered catalog is statistically distinguishable from a realization of a homogeneous Poisson process, using three statistical tests. Our main conclusion is that the observed fluctuation in the rate of large earthquakes (M≥8) is not surprising if global seismicity follows a Poisson process with a constant expected rate. Moreover, the recent rates of smaller earthquakes (7 ≤ M ≤ 8) are near their historical norms, and it is difficult to devise a physical mechanism that would increase the rate of the largest earthquakes but not the rate of smaller earthquakes. We conclude that the threat of large earthquake occurrence in regions far from recent enhanced activity is no higher today than it has been in the past.

Catalog and Local Declustering Method

We use moment magnitudes (Mw) and times from an earthquake catalog compiled for use by the US Geological Survey's Prompt Assessment of Global Earthquakes for Response system (PAGER-CAT) (12) for 1900 to 30 June 2008 seismicity, and the Preliminary Determination of Epicenters monthly and weekly (PDE and PDE-W) catalogs (available from the US Geological Survey National Earthquake Information Center web site, http://earthquake.usgs.gov/earthquakes) from 1 July 2008 to 13 August 2011. We consider only M≥7 events to reduce catalog completeness issues that may dominate results at smaller magnitudes. Nonuniformity in earthquake magnitude assignments is a serious issue in estimating earthquake rate changes (13, 14). PAGER-CAT attempts to use consistent moment magnitude estimates (see http://earthquake.usgs.gov/research/data/pager/PAGER_CAT_Sup.pdf), but the catalog might still have artificial changes in rate related to magnitude estimation.

Separating triggered aftershock seismicity from “background” seismicity is not trivial, and a variety of methods have been proposed. Because our focus here is on the global scale, we adopt the conservative and simple approach of removing events for which preceding larger earthquakes occur within 3 yr and 1,000 km. Declustering in this manner removes many events that might not traditionally be classified as aftershocks. For example, we remove both the March 2005 M 8.6 and September 2007 M 8.5 Sumatra earthquakes, retaining only the December 2004 M 9.0 Sumatra-Andaman earthquake. We do this because we want to consider only whether distant events are correlated, such as the 2010 M 8.8 Chile and 2011 M 9.0 Japan earthquakes, not whether regional-scale clustering may exist. Thus, it is important to decluster events at distances of less than about 1,000 km because they are not independent from a global perspective. We explore the effects of changing these declustering criteria below.
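For concreteness, the declustering rule can be sketched in a few lines of Python. The haversine great-circle distance, the conversion of 3 yr to days, and the function and variable names below are our assumptions for illustration, not the authors' published code.

```python
import numpy as np

def decluster(times_days, mags, lats_deg, lons_deg,
              window_days=3 * 365.25, radius_km=1000.0):
    """Drop every event preceded, within window_days and radius_km, by a larger event.

    Minimal sketch of the conservative rule described in the text; the authors'
    actual implementation is not published, so details here are assumptions.
    """
    order = np.argsort(times_days)
    t, m = np.asarray(times_days)[order], np.asarray(mags)[order]
    lat, lon = np.radians(lats_deg)[order], np.radians(lons_deg)[order]
    keep = np.ones(t.size, dtype=bool)
    for i in range(t.size):
        for j in range(i):
            if t[i] - t[j] > window_days or m[j] <= m[i]:
                continue  # too long ago, or the earlier event is not larger
            # haversine great-circle distance in km (mean Earth radius 6371 km)
            a = (np.sin((lat[i] - lat[j]) / 2) ** 2
                 + np.cos(lat[i]) * np.cos(lat[j]) * np.sin((lon[i] - lon[j]) / 2) ** 2)
            if 2 * 6371.0 * np.arcsin(np.sqrt(a)) <= radius_km:
                keep[i] = False
                break
    return order[keep]  # indices (into the input arrays) of retained events
```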

Magnitude Versus Time

Fig. 1 shows earthquake magnitudes versus time and smoothed yearly rates of M≥8, M≥7.5, and M≥7 activity. As expected, there are many more small earthquakes than large earthquakes, consistent with a Gutenberg-Richter (GR) power law relationship. The GR b value for the declustered catalog is approximately 1.3 for M≥7.5 earthquakes. There are only 16 earthquakes of M≥8.5, which limits the power of statistical tests of whether these giant events cluster in time. The eye is a poor judge of randomness and tends to find patterns in random sequences (e.g., ref. 15). Thus, the simple appearance of clustering should not be considered convincing evidence for nonrandomness.

Fig. 1.

(A) Global earthquake magnitudes since 1900 after regional declustering of events. (B)–(D) Yearly rates of M≥8, M≥7.5, and M≥7 earthquakes. Rates are five-year running averages.

Nonetheless, past researchers have pointed to several possibly anomalous features that are visible in this plot. First, there were a disproportionate number of very large M≥8.5 earthquakes between 1950 and 1965. Second, there was a dearth of such large earthquakes in the 38 yr from 1966 to 2003. Finally, since 2004 there has been an elevated rate of M≥8 earthquakes: The five-year running average is at a record high, although there have been rates nearly as high in the past. These anomalies are evident only for the largest earthquakes and are much weaker or absent for smaller earthquakes. This observation implies that if the large earthquake clustering is caused by a physical mechanism, the mechanism must affect M≥8 earthquakes without changing the rate of smaller events. This property is inconsistent with the triggering behavior implied by aftershock sequences, which are observed to have Gutenberg-Richter magnitude-frequency relationships reflecting a preponderance of smaller events (e.g., ref. 16).

Monte Carlo Tests

How statistically significant are these anomalies for the large earthquakes? Addressing this question is complicated by the fact that virtually every realization of a random process will have features that appear anomalous. If the statistical test is chosen after looking at the data, the true significance level or p value can be substantially larger than the nominal value computed as if the test had been chosen before collecting the data (more about this topic later). Nonetheless, it is interesting to find the apparent probability of observed anomalies under various assumptions about the underlying process. For instance, one might assume that seismicity follows a homogeneous Poisson process with the expected rate equal to the observed rate in the catalog and generate a series of random catalogs. The fraction of these catalogs that have anomalies like those observed—or even more “extreme”—is an estimate of the p value of the hypothesis that seismicity satisfies the assumptions in the simulation, which include conditioning on the estimated rate.

This approach was used by Bufe and Perkins (2) to assess the 1950–1965 peak in great earthquake activity. They estimated that there was only a 4% chance that the three M≥9 earthquakes observed through 2001 would occur within an 11.4-yr period for a 100-yr catalog and that there was only a 0.2% chance that seven of the nine M≥8.6 earthquakes observed through 2001 would occur within any 14.5-yr period. We have not recalculated these probabilities, but note that they are likely underestimates for at least two reasons. First, the rate of M≥9 and M≥8.6 earthquakes that Bufe and Perkins used in their calculations is almost certainly lower than the true long-term rate, as activity since 2001 has shown. Second, and more importantly, they appear to have selected details of their statistical tests, such as the magnitude thresholds, to maximize the apparent anomaly. As mentioned above, this approach causes the p value or significance level to appear to be smaller than it really is, taking into account the post hoc selection.

Bufe and Perkins (2) also considered the gap in M≥8.4 earthquakes between 1966 and 2001 and estimated that a 36-yr gap would occur in only 0.5% of random catalogs of 18 M≥8.4 earthquakes. We show below that this gap is perhaps the most anomalous feature in the global catalog. However, the 0.5% value is misleadingly low because (i) Bufe and Perkins included among the 18 events a 23 July 1905 M 8.4 earthquake that is likely an aftershock of a nearby M 8.5 earthquake occurring 14 d earlier, and (ii) Bufe and Perkins apparently selected the M≥8.4 cutoff to maximize the apparent anomaly (there were three M 8.3 earthquakes in their catalog between 1966 and 2001).

We use Monte Carlo simulations to estimate the probability both of the recent elevated rate of large earthquake activity and of the gap that preceded it, under the null hypothesis that seismicity follows a Poisson process that generates exactly as many events as were in fact observed. That is, under the null hypothesis, the number of events is given and the times of these events are independent, identically distributed (iid) random variables, all with a uniform distribution on the interval [0, 40,767] d. Our estimates are based on 100,000 random catalogs simulated from that joint distribution. The estimated probabilities are the fractions of those 100,000 catalogs that have the apparent anomaly at issue, for instance, the fraction that have at least a given number of events within an interval of a specified length.

Nine of the 75 (after local declustering) M≥8 earthquakes occurred in the 2,269-d period between the 23 December 2004 M 8.1 Macquarie earthquake and the 11 March 2011 M 9.0 Tohoku earthquake. Under the null hypothesis, there is about an 85% chance that at least nine of 75 events would occur within 2,269 d of each other: The recent elevated rate of large earthquakes is hardly surprising even if regionally declustered seismicity follows a homogeneous Poisson process.
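These estimates can be reproduced with a short routine. The sketch below (Python with numpy) conditions on the event count and draws iid uniform times on [0, 40,767] d as described above; the function name, seed, and structure are our assumptions rather than the authors' implementation.

```python
import numpy as np

def prob_cluster(n_events, window_days, cluster_size,
                 total_days=40767.0, n_sims=100_000, seed=0):
    """Estimate P(at least cluster_size of n_events iid-uniform event times
    fall within some interval of length window_days), given the event count."""
    rng = np.random.default_rng(seed)  # seed is arbitrary
    hits = 0
    for _ in range(n_sims):
        t = np.sort(rng.uniform(0.0, total_days, n_events))
        # tightest span covering cluster_size consecutive (sorted) events
        spans = t[cluster_size - 1:] - t[: n_events - cluster_size + 1]
        if spans.min() <= window_days:
            hits += 1
    return hits / n_sims

# Recent elevated rate: 9 of 75 M>=8 events within 2,269 d (about 85% in the text)
print(prob_cluster(n_events=75, window_days=2269.0, cluster_size=9))
```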

Three of 16 M≥8.5 declustered earthquakes occur during the 2,266 d between the 26 December 2004 M 9.0 Sumatra earthquake and the Tohoku earthquake. Under the null hypothesis, there is about a 97% chance that at least three of 16 events will occur within 2,266 d of each other. Even if we cherry-pick the lower magnitude threshold to be 8.8 (the size of the 27 February 2010 Maule earthquake), so that three of six M≥8.8 events occur in a 2,266-d interval, this event concentration has a 14% chance under the null hypothesis that regionally declustered seismicity is Poisson.

The lack of M≥8.5 events in the approximately 40 yr between 4 February 1965 and 26 December 2004 is more anomalous than the recent elevated rate. Under the null hypothesis, the probability that 16 events in a 111-yr interval would contain such a long gap is only about 1.3%. However, this feature was selected in retrospect, and it is essentially always possible to find a feature of any specific realization of a random process that appears improbable—that only a small fraction of random realizations would have. Hence, this gap is hardly evidence that the underlying process is nonuniform.
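The gap probability can be estimated the same way by measuring the longest gap in each simulated catalog, as in the sketch below; whether the open intervals at the ends of the catalog count as gaps, and the conversion of the roughly 40-yr gap to about 14,570 d, are assumptions made for illustration.

```python
import numpy as np

def prob_long_gap(n_events, gap_days, total_days=40767.0, n_sims=100_000, seed=1):
    """Estimate P(the longest gap among n_events iid-uniform event times,
    including the open intervals at the two catalog ends, is at least gap_days)."""
    rng = np.random.default_rng(seed)  # seed is arbitrary
    hits = 0
    for _ in range(n_sims):
        t = np.sort(rng.uniform(0.0, total_days, n_events))
        gaps = np.diff(np.concatenate(([0.0], t, [total_days])))
        if gaps.max() >= gap_days:
            hits += 1
    return hits / n_sims

# A ~40-yr (~14,570-d) gap among the 16 M>=8.5 events (about 1.3% in the text)
print(prob_long_gap(n_events=16, gap_days=14570.0))
```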

To illustrate the effect of post hoc selection on nominal p values, we performed the following experiment. We generated 1,000 different catalogs of 330 events with random times uniformly distributed between 0 and 40,767 d and random magnitudes of 7.5 ≤ M ≤ 9.6 in 0.1-unit increments, assuming a b value of 1.3 (close to that observed for the declustered catalog). For each catalog we searched for event clusters, defined as the greatest concentration in time of n events of M≥Mmin, where the minimum magnitude Mmin varied from 8.0 to 9.0 in steps of 0.1 and the number of events in the cluster n varied from 2 to 15. Using the Monte Carlo approach described above, for every (Mmin, n) pair, we estimated the probability of its cluster, given the total number of events of M≥Mmin in the random catalog, and found the "least likely" or "most surprising" cluster. We found that 91% of the 1,000 random catalogs had a cluster that should occur less than 10% of the time, 74% of the catalogs had a cluster that should occur less than 5% of the time, and 30% of the catalogs had a cluster that should occur less than 1% of the time. The (Mmin, n) values corresponding to the most surprising cluster differ for different catalogs. If the analyst selects the most anomalous feature in a specific dataset, the nominal p value, which ignores the fact that the feature was selected after looking at the data, is generally much smaller than the true p value, which accounts for that selection. This simulation considered only a single cluster of events, but searching for gaps or for more than one cluster would also result in a nominal p value much smaller than the true p value, because the surprising feature was chosen after looking at the data.
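The experiment can be sketched by combining a discrete Gutenberg-Richter magnitude sampler with the prob_cluster routine sketched above; the reduced inner simulation count and all names are assumptions chosen to keep the illustration compact and fast, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)  # seed is arbitrary

# Discrete Gutenberg-Richter magnitudes 7.5..9.6 in 0.1-unit steps, b = 1.3
mag_bins = np.round(np.arange(7.5, 9.65, 0.1), 1)
weights = 10.0 ** (-1.3 * mag_bins)
weights /= weights.sum()

def most_surprising_cluster(times, mags, n_sims=2_000):
    """Smallest nominal cluster p value over (Mmin, n), Mmin = 8.0..9.0, n = 2..15.
    Uses prob_cluster (sketched in Monte Carlo Tests above)."""
    best = 1.0
    for mmin in np.round(np.arange(8.0, 9.05, 0.1), 1):
        t = np.sort(times[mags >= mmin])
        for n in range(2, 16):
            if t.size < n:
                break
            span = (t[n - 1:] - t[: t.size - n + 1]).min()  # tightest n-event window
            best = min(best, prob_cluster(t.size, span, n, n_sims=n_sims))
    return best

# One of 1,000 random catalogs of 330 events on [0, 40,767] d
times = rng.uniform(0.0, 40767.0, 330)
mags = rng.choice(mag_bins, size=330, p=weights)
print(most_surprising_cluster(times, mags))
```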

Tests of the Poisson Hypothesis

A more general question is whether the earthquake catalog, after removing regional clustering, is statistically distinguishable from a realization of a homogeneous Poisson process. We consider three tests, all of which condition on the number of events after declustering, so that the times of those events are iid uniform random variables under the null hypothesis. For additional tests, see ref. 17.

The first test compares the empirical distribution of the times with the uniform distribution. For each magnitude threshold, we determine the times of (locally declustered) events of that magnitude or above. We then perform a Kolmogorov-Smirnov (KS) test (e.g., ref. 18) of the hypothesis that those times are a sample of iid uniform random variables, estimating the p value by simulation. The second and third tests use the chi-square statistic but in different ways. Both partition the observation period into equal-length windows (we used Nw = 100 windows). These tests are more complicated than the KS test and require ad hoc choices, such as the lengths of the windows; the p values depend on those choices.
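As an illustration of the first test, the following sketch rescales the event times to [0, 1] and uses scipy's KS statistic, with the p value estimated by simulation as described; the simulation count, seed, and function name are assumptions, not the authors' code.

```python
import numpy as np
from scipy import stats

def ks_uniform_pvalue(event_times, total_days=40767.0, n_sims=10_000, seed=3):
    """KS test that event_times are iid uniform on [0, total_days];
    p value estimated by simulation, conditioning on the number of events."""
    rng = np.random.default_rng(seed)
    u = np.asarray(event_times, dtype=float) / total_days
    d_obs = stats.kstest(u, "uniform").statistic
    d_sim = np.array([stats.kstest(rng.uniform(size=u.size), "uniform").statistic
                      for _ in range(n_sims)])
    return float((d_sim >= d_obs).mean())
```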

The first chi-square test (the Poisson dispersion test) uses the fact that the conditional joint distribution of the number of events in different windows, given the total number of events, is multinomial with equal category probabilities. The test statistic for this test is proportional to the variance of the counts across windows. We estimated the p value by simulating 100,000 catalogs with the same number of events that the declustered catalogs contained and iid uniformly distributed event times.
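A corresponding sketch of the Poisson dispersion test, conditioning on the number of events and using the variance of per-window counts as the statistic, might look like this (Nw = 100 windows as above; other details are assumptions):

```python
import numpy as np

def poisson_dispersion_pvalue(event_times, total_days=40767.0, n_windows=100,
                              n_sims=100_000, seed=4):
    """Variance of per-window counts as the test statistic; p value from
    simulated catalogs with the same number of iid-uniform event times."""
    rng = np.random.default_rng(seed)
    n = len(event_times)
    edges = np.linspace(0.0, total_days, n_windows + 1)
    var_obs = np.histogram(event_times, bins=edges)[0].var()
    var_sim = np.array([np.histogram(rng.uniform(0.0, total_days, n), bins=edges)[0].var()
                        for _ in range(n_sims)])
    return float((var_sim >= var_obs).mean())  # clustering inflates the variance
```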

The second chi-square test (the multinomial chi-square test) assesses how well the numbers of windows with various numbers of events agree with the numbers expected for homogeneous Poisson seismicity. To perform the multinomial chi-square test, we calculated the observed average rate λ of events per window. On the assumption that seismicity follows a Poisson distribution with the expected rate per window equal to the average rate λ, let K− denote the smallest integer such that the expected number of windows with no more than K− events is at least five, and let K+ denote the largest integer such that the expected number of windows with at least K+ events is at least five:*

K_- = \min\{k \ge 0 : N_w \Pr(X \le k) \ge 5\}, [1]

K_+ = \max\{k : N_w \Pr(X \ge k) \ge 5\}, [2]

where X is a Poisson random variable with mean λ and N_w = 100 is the number of windows. Define the expected category counts

e_k = \begin{cases} N_w \Pr(X \le K_-), & k = K_-, \\ N_w \Pr(X = k), & K_- < k < K_+, \\ N_w \Pr(X \ge K_+), & k = K_+. \end{cases} [3]

Let X_{K_-} denote the number of windows that contain K− or fewer events; for k = K− + 1, …, K+ − 1, let X_k denote the number of windows that contain exactly k events; and let X_{K_+} denote the number of windows that contain K+ or more events. The test statistic is

\chi^2 = \sum_{k=K_-}^{K_+} \frac{(X_k - e_k)^2}{e_k}. [4]

The values of (K−, K+) were (3, 12) for the 759 M≥7.0 events, (1, 7) for the 330 M≥7.5 events, and (0, 2) for the 75 M≥8.0 events.
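The statistic of Eqs. 1–4 can be computed as in the sketch below; as noted in the footnote, the p value is then estimated by simulation rather than taken from the chi-square distribution. Function names and defaults are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def multinomial_chisq_stat(event_times, total_days=40767.0, n_windows=100):
    """Chi-square statistic of Eqs. 1-4: window counts pooled into categories
    K-, K-+1, ..., K+, chosen so every expected category count is at least 5."""
    edges = np.linspace(0.0, total_days, n_windows + 1)
    counts = np.histogram(event_times, bins=edges)[0]
    pois = stats.poisson(counts.mean())        # Poisson with the observed mean rate
    k_minus = 0
    while n_windows * pois.cdf(k_minus) < 5:   # Eq. 1
        k_minus += 1
    k_plus = k_minus
    while n_windows * pois.sf(k_plus) >= 5:    # Eq. 2; sf(k) = P(X > k) = P(X >= k+1)
        k_plus += 1
    ks = np.arange(k_minus, k_plus + 1)
    expected = n_windows * pois.pmf(ks)        # Eq. 3, interior categories
    expected[0] = n_windows * pois.cdf(k_minus)       # pooled lower tail
    expected[-1] = n_windows * pois.sf(k_plus - 1)    # pooled upper tail, P(X >= K+)
    observed = np.array([(counts == k).sum() for k in ks])
    observed[0] = (counts <= k_minus).sum()
    observed[-1] = (counts >= k_plus).sum()
    return float(((observed - expected) ** 2 / expected).sum())   # Eq. 4
```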

The Kolmogorov-Smirnov test and the Poisson dispersion test are sensitive to whether the rate varies with time, that is, to clustering. In contrast, the multinomial chi-square test is more sensitive to whether the distribution of the number of events in each window differs from the distribution expected if declustered seismicity were Poisson. For instance, if event times were equispaced, neither the KS test nor the Poisson dispersion test would reject the Poisson hypothesis, but the multinomial chi-square test would, given enough data.

Table 1 gives the results of all three tests, computed separately for minimum magnitudes of 7.0, 7.5, and 8.0. For the catalogs with aftershocks removed (i.e., Fig. 1), the p values range from 7.5% to 94%. If we remove all smaller events within 3 yr and 1,000 km, regardless of whether they are foreshocks or aftershocks, the p values range from 16.7% to 95.1%. For all three magnitude thresholds and all three tests, the observed distribution of declustered event times is consistent with the hypothesis that declustered event times follow a homogeneous Poisson process. These results agree with those of Michael (19), who concluded that the global M≥7.5 and M≥8 catalogs (1900–2011), after aftershock removal, are well described by a Poisson process. Our smallest estimated p values, after declustering, occur for the M≥7 catalog, which has the largest number of events and is thus most sensitive to small rate changes, but even these values are not less than 5%.

Table 1.

Estimated p values for the hypothesis that times of events in the original and regionally declustered catalogs are independent, identically distributed uniform random variables, for several hypothesis tests

One might ask what it would take to reject the hypothesis that regionally declustered seismicity follows a homogeneous Poisson process. Suppose the declustered catalog were extended by a year. How many more globally dispersed events of a given magnitude would have to occur in that year for the tests to reject the null hypothesis? For M≥8.5 events, the hypothesis would be rejected by the Poisson dispersion and multinomial chi-square tests if three more such events were to occur in the year following the end of the catalog, increasing the total from 16 to 19 events. The p values for the KS, Poisson dispersion (PD), and multinomial chi-square (MC) tests then would be about 16.2%, 0.9%, and 1.6%, respectively. For M≥8.0 events, the hypothesis would be rejected by the multinomial chi-square test if seven more such events were to occur in the year; the p values for the KS, PD, and MC tests then would be 6.0%, 5.3%, and 2.1%, respectively.

As mentioned above, it is essentially always possible in retrospect to find some feature of a dataset that would be prospectively unlikely under the null hypothesis, so it is essentially always possible—after looking at the data—to find or contrive some test that formally rejects the null hypothesis (in this case, for instance, a test on the basis of the longest gap between M≥8.5 events). But, as the simulations in Monte Carlo Tests show, the formal significance level or p value would not be meaningful, because it does not take into account the “data snooping” involved in selecting the test.

Sensitivity to Declustering Parameters

We have shown that the global catalog of large earthquakes, with either aftershocks alone or both foreshocks and aftershocks removed as described above, is statistically indistinguishable from a homogeneous Poisson process. However, it is well known that earthquakes cluster in local and regional catalogs: There are swarms, foreshocks, and aftershocks. Thus, we should expect the p value of the Poisson hypothesis to be lower for the original catalogs than for the declustered catalogs, as Table 1 confirms: The p values are generally rather smaller for the original catalogs than for the declustered catalogs (except for the multinomial chi-square test, which, as mentioned, measures something other than clustering). The original (undeclustered) catalog for M≥7.0 earthquakes is clearly inconsistent with the Poisson hypothesis. But the p values are not small for the original M≥7.5 and M≥8.0 catalogs. Even without declustering, the null hypothesis that times of large earthquakes follow a homogeneous Poisson process would not be rejected by any of these tests.

Because the original catalog for M 7.5 and larger events is consistent with the Poisson hypothesis, our conclusions for large earthquakes clearly do not depend strongly on details of the declustering method. For example, milder cutoffs of 400 d and 333 km give KS p values of 56.5% and 35.9% for the M≥8 and M≥7.5 declustered catalogs (aftershocks removed), respectively. The estimated p values depend on the choice of declustering parameters, but our main conclusions do not depend upon these details.

Discussion

Global clustering of large earthquakes is not statistically significant: The data are statistically consistent with the hypothesis that these events arise from a homogeneous Poisson process. However, it is possible that rate changes are at least partially responsible for the surplus of large earthquakes during 1950–1965 and 2004–2011 and for the intervening gap in activity. The long-term average rate of large earthquakes is uncertain. McCaffrey (20) argues on the basis of global subduction zone properties that the expected rate of M 9 earthquakes may be only 1–3 per century, implying that the five M 9 earthquakes observed since 1900 exceed the expected number. Given the low rate of large earthquakes, there will not be enough data to place tight constraints on the long-term average rate and possible rate changes for many years.

The stability of earthquake magnitude estimates is also critical. Individual magnitudes are typically uncertain to about 0.1 units. For a Gutenberg-Richter b value of one, a systematic increase in magnitude of 0.1 would increase the apparent rate of earthquakes by 25%. For our estimated b value of 1.3 for large earthquakes in the declustered catalog, the increase in apparent rate is about 35%. It is not our purpose here to revisit the discussion of possible systematic changes in catalog magnitude assignments (e.g., refs. 13, 14, and 21) but simply to note that magnitude estimates matter for evaluating possible rate changes. For example, Engdahl and Villaseñor (21) wrote, “Moreover, it was impossible to match the seismicity rates of the historical period to those of the modern period without making a reduction in the older magnitudes by about 0.2 units. However, final resolution of this problem is presently beyond the scope of this study so that, for example, the apparent higher seismicity rate during the 1940–1960 period (Fig. 3b) will remain problematic.” The uncertainty in catalog magnitude stability introduces hard-to-quantify errors in assessing possible long-term rate changes.
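As a quick check of these figures, the Gutenberg-Richter relation N(≥M) ∝ 10^(−bM) implies that a uniform upward shift of ΔM in magnitude multiplies the apparent rate above any fixed threshold by 10^(bΔM):

```latex
\frac{N(\ge M)\big|_{\text{shifted}}}{N(\ge M)} = 10^{\,b\,\Delta M},
\qquad 10^{1.0 \times 0.1} \approx 1.26,
\qquad 10^{1.3 \times 0.1} \approx 1.35,
```

consistent with the roughly 25% (b = 1) and 35% (b = 1.3) increases quoted above.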

Because empirical earthquake data do not settle the question definitively, the case for global rate changes depends on the plausibility of physical mechanisms that might cause such changes. One possibility is that large earthquakes trigger other large earthquakes, like the triggering observed in local and regional aftershock sequences. Although static stress changes resulting from fault rupture decay rapidly with distance, dynamic triggering is often observed at much larger distances than the traditional aftershock zone (e.g., ref. 22). Surface waves generated by large earthquakes have been observed to trigger small (M < 5) earthquakes at global distances (e.g., refs. 23 and 24). However, Parsons and Velasco (25) find that the past 30 yr of seismicity shows no short-term (< 1 d) surface-wave triggering of larger (M > 5) earthquakes at distances beyond 600–1,000 km. They conclude that the regional risk of larger earthquakes is increased following a main shock but that the global risk is not.

Thus, although earthquake-to-earthquake triggering has been hypothesized to cause apparent global clustering of large earthquakes (5), such triggering of large earthquakes, if it exists, would need to behave differently from ordinary aftershock sequences, which typically follow both a Gutenberg-Richter distribution of event sizes and Omori’s law for decay of frequency with time. As discussed above, during periods with an above-average rate of large earthquakes, there has been no corresponding increase in the rate of smaller earthquakes. As just noted, Parsons and Velasco (25) show that there is no peak in large earthquake activity just after the surface waves from large earthquakes pass. And the above-average rates of large earthquake activity from 1950–1965 and 2004–2011 did not start with the flurry of activity typical in aftershock sequences. The largest and second-largest recorded earthquakes since 1900, the M 9.6 1960 Chile and the M 9.2 1964 Alaska events, were near the end of a period of enhanced activity, not near its beginning, and were followed by a notable gap in large earthquakes.

These observations imply that global earthquake clustering, if it has a physical explanation, is more analogous to seismic swarms than to main shock–aftershock sequences. Swarms are spatially compact clusters of events that occur for a limited time and typically do not begin with their largest event. They are difficult to explain with standard earthquake-to-earthquake triggering models and are often ascribed to slow slip or fluid flow near the swarm site (e.g., refs. 26 and 27); i.e., there is an underlying physical driving mechanism. But no proposed mechanism for possible global seismicity swarms is both physically plausible and able to escape detection in other observations. Although global cycles of earthquake energy release have been hypothesized (2, 28), there is as yet no evidence to support these ideas, other than apparent changes in seismicity rate, which, as we have shown here and Michael (19) has shown, are not statistically significant.

Another hypothesis is that stress diffused through postseismic relaxation of the asthenosphere triggers events on a global scale. Pollitz et al. (29) suggested that four great subduction zone earthquakes from 1952 to 1964 in the Kurils-Aleutians-Alaska arc caused a stress pulse that reached California in 1985 and may have increased seismicity rates there. However, the predicted stress changes at large distances are small compared to those usually thought to trigger earthquakes (e.g., ref. 30). An additional difficulty with this explanation for global clustering of large earthquakes is that a stress pulse would travel too slowly to cause the observed global grouping of large earthquakes. The 2010 M 8.8 Chile earthquake is too far away from the 2004 M 9 Sumatra earthquake for stress diffusion to be a factor.

Our conclusion that the global threat of large earthquakes has not recently increased is based both on the lack of statistical evidence that regionally declustered seismicity is temporally heterogeneous on a global scale and on the implausibility of physical mechanisms proposed to explain global clustering. The estimated global rate of very large (M > 9) earthquakes is still very uncertain because only five such events have occurred since 1900. The recent elevated rate of large earthquakes has increased estimates of large earthquake danger: The empirical rate of such events is higher than before. However, there is no evidence that the rate of the underlying process has changed. In other words, there is no evidence that the risk has changed, but our estimates of the risk have changed.

Although there is little evidence that the global threat of earthquake occurrence has changed in areas far from recent activity, the current threat of large earthquakes is certainly above its long-term average in regions like Sumatra, Chile, and Japan, which have recently experienced large earthquakes. Finally, of course, even if the danger has not increased recently, that does not mean that the ongoing danger is small or should be ignored.

Acknowledgments

We thank Thomas Jordan and Andrew Michael for stimulating and constructive reviews. This research was supported by the US Geological Survey National Earthquake Hazards Reduction Program and by the Southern California Earthquake Center.

Footnotes

  • To whom correspondence should be addressed. E-mail: pshearer@ucsd.edu.
  • Author contributions: P.M.S. and P.B.S. designed research, performed research, analyzed data, and wrote the paper.

  • The authors declare no conflict of interest.

  • See Commentary on page 651.

  • *The threshold 5 was chosen because many textbooks state that the chi-square approximation to the null distribution of the chi-square statistic is adequate when the expected number of counts in every bin is at least 5. However, this method of selecting the categories for the chi-square test makes the number and nature of the categories depend on the observed data through the empirical rate of seismicity. Conditioning on the data in this way alters the null distribution of the test statistic, which we take into account by using simulation instead of relying on the chi-square distribution to find the p value.

Freely available online through the PNAS open access option.

References

  1. Ammon CJ, Lay T, Simpson DW (2010) Great earthquakes and global seismic networks. Seismol Res Lett 81:965–971.
  2. Bufe CG, Perkins DM (2005) Evidence for a global seismic-moment release sequence. Bull Seismol Soc Am 95:833–843.
  3. Bufe CG, Perkins DM (2011) The 2011 Tohoku earthquake: Resumption of temporal clustering of Earth's megaquakes. Seismol Res Lett 82:455 (abstr).
  4. Taira TP, Silver PG, Niu F, Nadeau R (2009) Remote triggering of fault-strength changes on the San Andreas fault at Parkfield. Nature 461:636–640.
  5. Brodsky EE (2009) The 2004–2008 worldwide superswarm. EOS Trans AGU 90, Fall Meeting (Suppl), abstract S53B-06.
  6. Perkins S (2011) Are larger earthquakes a sign of the times? Nature, doi:10.1038/news.2011.241.
  7. Gardner JK, Knopoff L (1974) Is the sequence of earthquakes in southern California, with aftershocks removed, Poissonian? Bull Seismol Soc Am 64:1363–1367.
  8. Ogata Y (1988) Statistical models for earthquake occurrences and residual analysis for point processes. J Am Stat Assoc 83:9–27.
  9. Kagan YY, Jackson DD (1991) Long-term earthquake clustering. Geophys J Int 104:117–133.
  10. Wyss M, Toya Y (2000) Is background seismicity produced at a stationary Poissonian rate? Bull Seismol Soc Am 90:1174–1187.
  11. Lombardi AM, Marzocchi W (2007) Evidence of clustering and nonstationarity in the time distribution of large worldwide earthquakes. J Geophys Res 112:B02303.
  12. Allen TI, Marano KD, Earle PS, Wald DJ (2009) PAGER-CAT: A composite earthquake catalog for calibrating global fatality models. Seismol Res Lett 80:57–62.
  13. Haberman RE (1982) Consistency of teleseismic reporting since 1963. Bull Seismol Soc Am 72:93–111.
  14. Haberman RE (1991) Seismicity rate variations and systematic changes in magnitudes in teleseismic catalogs. Tectonophysics 193:277–289.
  15. Bar-Hillel M, Wagenaar WA (1991) The perception of randomness. Adv Appl Math 12:428–454.
  16. Frohlich C, Davis SD (1993) Teleseismic b values; or, Much Ado about 1.0. J Geophys Res 98:631–644.
  17. Brown LD, Zhao LH (2002) A test for the Poisson distribution. Sankhyā Indian J Stat 64:611–625.
  18. Lehmann EL (2005) Testing Statistical Hypotheses (Springer, New York), 3rd Ed.
  19. Michael AJ (2011) Random variability explains apparent global clustering of large earthquakes. Geophys Res Lett 38:L21301.
  20. McCaffrey R (2008) Global frequency of magnitude 9 earthquakes. Geology 36:263–266.
  21. Engdahl ER, Villaseñor A (2002) Global seismicity: 1900–1999. International Handbook of Earthquake and Engineering Seismology (Academic, New York), Vol 81A, pp 665–690.
  22. Hill DP, et al. (1993) Seismicity remotely triggered by the magnitude 7.3 Landers, California, earthquake. Science 260:1617–1623.
  23. Gomberg J, Bodin P, Larson K, Dragert H (2004) Earthquake nucleation by transient deformations caused by the M = 7.9 Denali, Alaska, earthquake. Nature 427:621–624.
  24. Velasco AA, Hernandez S, Parsons T, Panlow K (2008) Global ubiquity of dynamic earthquake triggering. Nat Geosci 1:375–379.
  25. Parsons T, Velasco AA (2011) Absence of remotely triggered large earthquakes beyond the mainshock region. Nat Geosci 4:312–316.
  26. Hainzl S (2004) Seismicity patterns of earthquake swarms due to fluid intrusion and stress triggering. Geophys J Int 159:1090–1096.
  27. Vidale JE, Shearer PM (2006) A survey of 71 earthquake bursts across southern California: Exploring the role of pore fluid pressure fluctuations and aseismic slip as drivers. J Geophys Res 111:B05312.
  28. Romanowicz B (1993) Spatiotemporal patterns of energy release of great earthquakes. Science 260:1923–1926.
  29. Pollitz FF, Bürgmann R, Romanowicz B (1998) Viscosity of oceanic asthenosphere inferred from remote triggering of earthquakes. Science 280:1245–1249.
  30. Kerr RA (1998) Can great earthquakes extend their reach? Science 280:1194–1195.