Global risk of big earthquakes has not recently increased
Contributed by Peter M. Shearer, November 12, 2011 (sent for review September 27, 2011)

Abstract
The recent elevated rate of large earthquakes has fueled concern that the underlying global rate of earthquake activity has increased, which would have important implications for assessments of seismic hazard and our understanding of how faults interact. We examine the timing of large (magnitude M≥7) earthquakes from 1900 to the present, after removing local clustering related to aftershocks. The global rate of M≥8 earthquakes has been at a record high roughly since 2004, but rates have been almost as high before, and the rate of smaller earthquakes is close to its historical average. Some features of the global catalog are improbable in retrospect, but so are some features of most random sequences—if the features are selected after looking at the data. For a variety of magnitude cutoffs and three statistical tests, the global catalog, with local clusters removed, is not distinguishable from a homogeneous Poisson process. Moreover, no plausible physical mechanism predicts real changes in the underlying global rate of large events. Together these facts suggest that the global risk of large earthquakes is no higher today than it has been in the past.
The above-average rate of earthquakes of magnitude 8 and larger in recent years (e.g., ref. 1) has prompted speculation that the underlying rate of earthquake activity has changed (2–5), that is, that the observed apparent rate fluctuation is larger than would be expected for a homogeneous random process. Similarly, the recent 2011 Tohoku, Japan, M 9.0 earthquake, together with the 2004 M 9.0 Sumatra-Andaman earthquake and the 2010 M 8.8 Maule, Chile, earthquake, has fueled concern that these giant quakes may not have been independent events (see the discussion in ref. 6). Temporal earthquake clustering, including aftershock sequences, is well known at local and regional scales. However, whether earthquake catalogs, with aftershocks removed, follow a temporal Poisson process—the canonical “unpredictable” temporal process—is a long-standing area of research in seismology (7–11).
True earthquake rate changes at global scales would have important implications for assessments of seismic danger and our understanding of how faults interact. Here we ask whether the recent elevation in large earthquake activity is statistically significant and, more broadly, whether the locally declustered global catalog is Poissonian. Using cataloged events from 1900 to 2011, we address these questions in three ways: (i) plotting earthquake activity versus time to identify apparent anomalies in present and past rates of large earthquakes; (ii) performing Monte Carlo tests to estimate the probability of specific observed anomalies if seismicity were Poissonian with the observed average occurrence rate; and (iii) testing whether the locally declustered catalog is statistically distinguishable from a realization of a homogeneous Poisson process, by using three statistical tests. Our main conclusion is that the observed fluctuations in the rate of large earthquakes (M≥8) are not surprising if global seismicity follows a Poisson process with a constant expected rate. Moreover, the recent rates of smaller earthquakes (7 ≤ M ≤ 8) are near their historic norms, and it is difficult to devise a physical mechanism that would increase the rate of the largest earthquakes but not the rate of smaller earthquakes. We conclude that the threat of large earthquake occurrence in regions far from recent enhanced activity is no higher today than it has been in the past.
Catalog and Local Declustering Method
We use moment magnitudes (Mw) and times from an earthquake catalog compiled for use by the US Geological Survey's Prompt Assessment of Global Earthquakes for Response system (PAGER-CAT) (12) for 1900 to 30 June 2008 seismicity, and the Preliminary Determination of Epicenters monthly and weekly (PDE and PDE-W) catalogs (available from the US Geological Survey National Earthquake Information Center web site, http://earthquake.usgs.gov/earthquakes) from 1 July 2008 to 13 August 2011. We consider only M≥7 events to reduce catalog completeness issues that may dominate results at smaller magnitudes. Nonuniformity in earthquake magnitude assignments is a serious issue in estimating earthquake rate changes (13, 14). PAGER-CAT attempts to use consistent moment magnitude estimates (see http://earthquake.usgs.gov/research/data/pager/PAGER_CAT_Sup.pdf), but the catalog might still have artificial changes in rate related to magnitude estimation.
Separating triggered aftershock seismicity from “background” seismicity is not trivial, and a variety of methods have been proposed. Because our focus here is on the global scale, we adopt the conservative and simple approach of removing events for which preceding larger earthquakes occur within 3 yr and 1,000 km. Declustering in this manner removes many events that might not traditionally be classified as aftershocks. For example, we remove both the March 2005 M 8.6 and September 2007 M 8.5 Sumatra earthquakes, retaining only the December 2004 M 9.0 Sumatra-Andaman earthquake. We do this because we want to consider only whether distant events are correlated, such as the 2010 M 8.8 Chile and 2011 M 9.0 Japan earthquakes, not whether regional-scale clustering may exist. Thus, it is important to decluster events at distances of less than about 1,000 km because they are not independent from a global perspective. We explore the effects of changing these declustering criteria below.
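As a concrete illustration, this simple rule can be written down in a few lines. The Python sketch below is ours and is only illustrative: the function and variable names are invented, great-circle (haversine) epicentral distance is assumed, and only strictly larger preceding events are treated as parents. The variant that also removes foreshocks (discussed later) amounts to replacing the sign condition on the time difference with its absolute value.

```python
import numpy as np

def decluster(times_days, mags, lats_deg, lons_deg,
              window_days=3 * 365.25, window_km=1000.0):
    """Flag events preceded by a larger event within `window_days` and
    `window_km`; returns a boolean mask of events to keep."""
    lat = np.radians(lats_deg)
    lon = np.radians(lons_deg)
    keep = np.ones(len(times_days), dtype=bool)
    for i in range(len(times_days)):
        for j in range(len(times_days)):
            dt = times_days[i] - times_days[j]
            # candidate parent: an earlier and larger event
            if 0 < dt <= window_days and mags[j] > mags[i]:
                # great-circle (haversine) distance in km
                a = (np.sin((lat[i] - lat[j]) / 2) ** 2
                     + np.cos(lat[i]) * np.cos(lat[j])
                     * np.sin((lon[i] - lon[j]) / 2) ** 2)
                if 2 * 6371.0 * np.arcsin(np.sqrt(a)) <= window_km:
                    keep[i] = False
                    break
    return keep
```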
Magnitude Versus Time
Fig. 1 shows earthquake magnitudes versus time and smoothed yearly rates of M≥8, M≥7.5, and M≥7 activity. As expected, there are many more small earthquakes than large earthquakes, consistent with a Gutenberg-Richter (GR) power law relationship. The GR b value for the declustered catalog is approximately 1.3 for M≥7.5 earthquakes. There are only 16 earthquakes of M≥8.5, which limits the power of statistical tests of whether these giant events cluster in time. The eye is a poor judge of randomness and tends to find patterns in random sequences (e.g., ref. 15). Thus, the simple appearance of clustering should not be considered convincing evidence for nonrandomness.
Fig. 1. (A) Global earthquake magnitudes since 1900 after regional declustering of events. (B–D) Yearly rates of M≥8, M≥7.5, and M≥7 earthquakes. Rates are five-year running averages.
Nonetheless, past researchers have pointed to several possibly anomalous features that are visible in this plot. First, there were a disproportionate number of very large M≥8.5 earthquakes between 1950 and 1965. Second, there was a dearth of such large earthquakes in the 38 yr from 1966 to 2003. Finally, since 2004 there has been an elevated rate of M≥8 earthquakes: The five-year running average is at a record high, although there have been rates nearly as high in the past. These anomalies are evident only for the largest earthquakes and are much weaker or absent for smaller earthquakes. This observation implies that if the large earthquake clustering is caused by a physical mechanism, the mechanism must affect M≥8 earthquakes without changing the rate of smaller events. This property is inconsistent with the triggering behavior implied by aftershock sequences, which are observed to have Gutenberg-Richter magnitude-frequency relationships reflecting a preponderance of smaller events (e.g., ref. 16).
Monte Carlo Tests
How statistically significant are these anomalies for the large earthquakes? Addressing this question is complicated by the fact that virtually every realization of a random process will have features that appear anomalous. If the statistical test is chosen after looking at the data, the true significance level or p value can be substantially larger than the nominal value computed as if the test had been chosen before collecting the data (more about this topic later). Nonetheless, it is interesting to find the apparent probability of observed anomalies under various assumptions about the underlying process. For instance, one might assume that seismicity follows a homogeneous Poisson process with the expected rate equal to the observed rate in the catalog and generate a series of random catalogs. The fraction of these catalogs that have anomalies like those observed—or even more “extreme”—is an estimate of the p value of the hypothesis that seismicity satisfies the assumptions in the simulation, which include conditioning on the estimated rate.
This approach was used by Bufe and Perkins (2) to assess the 1950–1965 peak in great earthquake activity. They estimated that there was only a 4% chance that the three M≥9 earthquakes observed through 2001 would occur within an 11.4-yr period for a 100-yr catalog and that there was only a 0.2% chance that seven of the nine M≥8.6 earthquakes observed through 2001 would occur within any 14.5-yr period. We have not recalculated these probabilities, but note that they are likely underestimates for at least two reasons. First, the rate of M≥9 and M≥8.6 earthquakes that Bufe and Perkins used in their calculations is almost certainly lower than the true long-term rate, as activity since 2001 has shown. Second, and more importantly, they appear to have selected details of their statistical tests, such as the magnitude thresholds, to maximize the apparent anomaly. As mentioned above, this approach causes the p value or significance level to appear to be smaller than it really is, taking into account the post hoc selection.
Bufe and Perkins (2) also considered the gap in M≥8.4 earthquakes between 1966 and 2001 and estimated that a 36-yr gap would occur in only 0.5% of random catalogs of 18 M≥8.4 earthquakes. We show below that this gap is perhaps the most anomalous feature in the global catalog. However, the 0.5% value is misleadingly low because (i) Bufe and Perkins included among the 18 events a 23 July 1905 M 8.4 earthquake that is likely an aftershock of a nearby M 8.5 earthquake occurring 14 d earlier, and (ii) Bufe and Perkins apparently selected the M≥8.4 cutoff to maximize the apparent anomaly (there were three M 8.3 earthquakes in their catalog between 1966 and 2001).
We use Monte Carlo simulations to estimate the probability both of the recent elevated rate of large earthquake activity and of the gap that preceded it, under the null hypothesis that seismicity follows a Poisson process that generates exactly as many events as were in fact observed. That is, under the null hypothesis, the number of events is given and the times of these events are independent, identically distributed (iid) random variables, each uniformly distributed on the interval [0, 40,767] d. Our estimates are based on 100,000 random catalogs simulated from that joint distribution. The estimated probabilities are the fractions of those 100,000 catalogs that have the apparent anomaly at issue, for instance, the fraction that have at least a given number of events within an interval of a specified length.
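The window probabilities quoted below are straightforward to reproduce. The following Python sketch (ours; names are illustrative and NumPy is assumed) conditions on the total event count and checks each random catalog for a sufficiently tight cluster:

```python
import numpy as np

def prob_cluster(n_total, n_cluster, window_days, catalog_days=40767.0,
                 n_sims=100_000, seed=0):
    """Estimate the chance that at least `n_cluster` of `n_total` events fall
    within some interval of length `window_days`, when event times are iid
    uniform on [0, catalog_days] (the conditional null hypothesis above)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        t = np.sort(rng.uniform(0.0, catalog_days, n_total))
        # width spanned by each run of n_cluster consecutive events
        spans = t[n_cluster - 1:] - t[: n_total - n_cluster + 1]
        if spans.min() <= window_days:
            hits += 1
    return hits / n_sims

# Nine of 75 M>=8 events within 2,269 d: prob_cluster(75, 9, 2269) should
# return roughly 0.85, as discussed in the next paragraph.
```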
Nine of the 75 (after local declustering) M≥8 earthquakes occurred in the 2,269-d period between the 23 December 2004 M 8.1 Macquarie earthquake and the 11 March 2011 M 9.0 Tohoku earthquake. Under the null hypothesis, there is about an 85% chance that at least nine of 75 events would occur within 2,269 d of each other: The recent elevated rate of large earthquakes is hardly surprising even if regionally declustered seismicity follows a homogeneous Poisson process.
Three of 16 M≥8.5 declustered earthquakes occur during the 2,266 d between the 26 December 2004 M 9.0 Sumatra earthquake and the Tohoku earthquake. Under the null hypothesis, there is about a 97% chance that at least three of 16 events will occur within 2,266 d of each other. Even if we cherry-pick the lower magnitude threshold to be 8.8 (the size of the 27 February 2010 Maule earthquake), so that three of six M≥8.8 events occur in a 2,266-d interval, this event concentration has a 14% chance under the null hypothesis that regionally declustered seismicity is Poisson.
The lack of M≥8.5 events in the approximately 40 yr between 4 February 1965 and 26 December 2004 is more anomalous than the recent elevated rate. Under the null hypothesis, the probability that a sequence of 16 events in a 111-yr interval would contain such a long gap is only about 1.3%. However, this feature was selected in retrospect, and it is essentially always possible to find a feature of any specific realization of a random process that appears improbable, in the sense that only a small fraction of random realizations would have it. Hence, this gap is hardly evidence that the underlying process is nonuniform.
To illustrate the effect of post hoc selection on nominal p values, we performed the following experiment. We generated 1,000 different catalogs of 330 events with random times uniformly distributed between 0 and 40,767 d and random magnitudes of 7.5 ≤ M ≤ 9.6 at 0.1 increments, assuming a b value of 1.3 (close to that observed for the declustered catalog). For each catalog we searched for event clusters, defined as the greatest concentration in time of n events of M≥Mmin, where the minimum magnitude Mmin varied from 8.0 to 9.0 in steps of 0.1, and the number of events in the cluster n ranges from 2 to 15. Using the Monte Carlo approach described above, for every (Mmin, n) pair, we estimated the probability of its cluster, given the total number of events of M≥Mmin in the random catalog, and found the “least likely” or “most surprising” cluster. We found that 91% of the 1,000 random catalogs had a cluster that should occur less than 10% of the time, 74% of the catalogs had a cluster that should occur less than 5% of the time, and 30% of the catalogs had a cluster that should occur less than 1% of the time. The (Mmin, n) values corresponding to the most surprising cluster differ for different catalogs. If the analyst selects the most anomalous feature in a specific dataset, the nominal p value, which ignores the fact that the feature was selected after looking at the data, is generally much smaller than the true p value, which accounts for that selection. This simulation considered only a single cluster of events, but searching for gaps or for more than one cluster would also result in a nominal p value much smaller than the true p value, because the surprising feature was chosen after looking at the data.
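The essential pieces of this experiment are sketched below. This is our illustration, not the exact code behind the numbers above: the discretized Gutenberg-Richter sampler and the reduced simulation counts are assumptions chosen to keep the example short and fast.

```python
import numpy as np

def gr_magnitudes(n, rng, m_min=7.5, m_max=9.6, b=1.3, dm=0.1):
    """Draw n magnitudes from a discretized, truncated Gutenberg-Richter
    distribution (probability proportional to 10**(-b*M) at 0.1 increments)."""
    mags = np.arange(m_min, m_max + dm / 2, dm)
    w = 10.0 ** (-b * mags)
    return rng.choice(mags, size=n, p=w / w.sum())

def tightest_window(t, n):
    """Length of the shortest interval containing n of the sorted times t."""
    return (t[n - 1:] - t[: len(t) - n + 1]).min()

def most_surprising_cluster(times, mags, rng, catalog_days=40767.0, n_sims=2_000):
    """Search over magnitude cutoffs (8.0-9.0) and cluster sizes (2-15) for the
    cluster with the smallest nominal Monte Carlo p value, conditioning on the
    number of events above each cutoff."""
    best = 1.0
    for m_cut in np.arange(8.0, 9.05, 0.1):
        t = np.sort(times[mags >= m_cut - 1e-9])
        for n in range(2, 16):
            if len(t) < n:
                break
            w_obs = tightest_window(t, n)
            sims = [tightest_window(np.sort(rng.uniform(0, catalog_days, len(t))), n)
                    for _ in range(n_sims)]
            best = min(best, float(np.mean(np.array(sims) <= w_obs)))
    return best

# Repeating this for many synthetic catalogs of 330 events with uniform times
# and gr_magnitudes(330, rng) magnitudes, then tabulating how often `best`
# falls below 0.10, 0.05, and 0.01, illustrates the effect described above.
```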
Tests of the Poisson Hypothesis
A more general question is whether the earthquake catalog, after removing regional clustering, is statistically distinguishable from a realization of a homogeneous Poisson process. We consider three tests, all of which condition on the number of events after declustering, so that the times of those events are iid uniform random variables under the null hypothesis. For additional tests, see ref. 17.
The first test compares the empirical distribution of the times with the uniform distribution. For each magnitude threshold, we determine the times of (locally declustered) events of that magnitude or above. We then perform a Kolmogorov-Smirnov (KS) test (e.g., ref. 18) of the hypothesis that those times are a sample of iid uniform random variables, estimating the p value by simulation. The second and third tests use the chi-square statistic but in different ways. Both partition the observation period into equal-length windows (we used Nw = 100 windows). These tests are more complicated than the KS test and require ad hoc choices, such as the lengths of the windows; the p values depend on those choices.
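A minimal version of the first test might look like the following (our sketch; SciPy and NumPy are assumed, and the simulation count is illustrative):

```python
import numpy as np
from scipy import stats

def ks_uniform_pvalue(times, catalog_days=40767.0, n_sims=10_000, seed=0):
    """One-sample KS test that event times are iid uniform on [0, catalog_days],
    with the p value estimated by simulation (conditioning on the event count)
    rather than taken from the asymptotic KS distribution."""
    rng = np.random.default_rng(seed)
    cdf = stats.uniform(loc=0.0, scale=catalog_days).cdf
    d_obs = stats.kstest(times, cdf).statistic
    d_sim = np.array([stats.kstest(rng.uniform(0.0, catalog_days, len(times)),
                                   cdf).statistic
                      for _ in range(n_sims)])
    return float(np.mean(d_sim >= d_obs))
```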
The first chi-square test (the Poisson dispersion test) uses the fact that the conditional joint distribution of the number of events in different windows, given the total number of events, is multinomial with equal category probabilities. The test statistic for this test is proportional to the variance of the counts across windows. We estimated the p value by simulating 100,000 catalogs with the same number of events that the declustered catalogs contained and iid uniformly distributed event times.
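A corresponding sketch of the Poisson dispersion test, again conditioning on the total number of events (ours; names illustrative):

```python
import numpy as np

def poisson_dispersion_pvalue(times, catalog_days=40767.0, n_windows=100,
                              n_sims=100_000, seed=0):
    """Conditional dispersion test: the statistic is the variance of the
    per-window counts; its null distribution comes from catalogs with the
    same number of events and iid uniform times."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, catalog_days, n_windows + 1)
    var_obs = np.var(np.histogram(times, bins=edges)[0])
    var_sim = np.array([np.var(np.histogram(rng.uniform(0.0, catalog_days, len(times)),
                                            bins=edges)[0])
                        for _ in range(n_sims)])
    return float(np.mean(var_sim >= var_obs))
```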
The second chi-square test (the multinomial chi-square test) assesses how well the numbers of windows with various numbers of events agree with the numbers expected for homogeneous Poisson seismicity. To perform the multinomial chi-square test, we calculated the observed average rate λ of events per period. On the assumption that seismicity follows a Poisson distribution with the expected rate per period equal to the average rate λ, let $K_-$ denote the smallest integer such that the expected number of periods with no more than $K_-$ events is at least five, and let $K_+$ denote the largest integer such that the expected number of periods with at least $K_+$ events is at least five:*

[1] $K_- = \min\left\{ K : N_w \sum_{k=0}^{K} \frac{e^{-\lambda}\lambda^k}{k!} \ge 5 \right\}$

[2] $K_+ = \max\left\{ K : N_w \sum_{k=K}^{\infty} \frac{e^{-\lambda}\lambda^k}{k!} \ge 5 \right\}$

Define

[3] $E_{K_-} = N_w \sum_{k=0}^{K_-} \frac{e^{-\lambda}\lambda^k}{k!}, \qquad E_k = N_w \frac{e^{-\lambda}\lambda^k}{k!} \ \ (k = K_- + 1, \ldots, K_+ - 1), \qquad E_{K_+} = N_w \sum_{k=K_+}^{\infty} \frac{e^{-\lambda}\lambda^k}{k!}$

Let $X_{K_-}$ denote the number of periods that contain $K_-$ or fewer events. For $k = K_- + 1, \ldots, K_+ - 1$, let $X_k$ denote the number of periods that contain k events. And let $X_{K_+}$ denote the number of periods that contain $K_+$ or more events. The test statistic is

[4] $\chi^2 = \sum_{k=K_-}^{K_+} \frac{(X_k - E_k)^2}{E_k}$

The values of $K_-$ and $K_+$ were 3 and 12 for the 759 M≥7.0 events, 1 and 7 for the 330 M≥7.5 events, and 0 and 2 for the 75 M≥8.0 events, respectively.
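A hedged sketch of the multinomial chi-square test follows, combining the pooling rule in Eqs. [1]–[3] with the statistic in Eq. [4]; as noted in the footnote, the p value comes from simulation rather than from the chi-square distribution (our illustration; SciPy assumed, simulation count reduced):

```python
import numpy as np
from scipy import stats

def multinomial_chisq_pvalue(times, catalog_days=40767.0, n_windows=100,
                             n_sims=10_000, seed=0):
    """Pool count categories so each has expected frequency >= 5 under a
    Poisson model with the observed mean rate, then compare the chi-square
    statistic with its simulated null distribution (iid uniform times,
    total event count fixed)."""
    rng = np.random.default_rng(seed)
    n = len(times)
    lam = n / n_windows                      # observed mean events per window
    pois = stats.poisson(lam)

    # K- : smallest K with N_w * P(X <= K) >= 5
    k_lo = 0
    while n_windows * pois.cdf(k_lo) < 5:
        k_lo += 1
    # K+ : largest K with N_w * P(X >= K) >= 5  (note sf(k-1) = P(X >= k))
    k_hi = k_lo + 1
    while n_windows * pois.sf(k_hi - 1) >= 5:
        k_hi += 1
    k_hi -= 1
    assert k_hi > k_lo, "too few events for this pooling rule"

    expected = np.array([n_windows * pois.cdf(k_lo)]
                        + [n_windows * pois.pmf(k) for k in range(k_lo + 1, k_hi)]
                        + [n_windows * pois.sf(k_hi - 1)])
    edges = np.linspace(0.0, catalog_days, n_windows + 1)

    def chisq(t):
        counts = np.histogram(t, bins=edges)[0]
        observed = np.array([np.sum(counts <= k_lo)]
                            + [np.sum(counts == k) for k in range(k_lo + 1, k_hi)]
                            + [np.sum(counts >= k_hi)])
        return np.sum((observed - expected) ** 2 / expected)

    chi_obs = chisq(times)
    chi_sim = np.array([chisq(rng.uniform(0.0, catalog_days, n)) for _ in range(n_sims)])
    return float(np.mean(chi_sim >= chi_obs))
```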
The Kolmogorov-Smirnov test and the Poisson dispersion test are sensitive to whether the rate varies with time, that is, to clustering. In contrast, the multinomial chi-square test is more sensitive to whether the distribution of the number of events in each window differs from the distribution expected if declustered seismicity were Poisson. For instance, if event times were equispaced, neither the KS test nor the Poisson dispersion test would reject the Poisson hypothesis, but the multinomial chi-square test would, given enough data.
Table 1 gives the results of all three tests, computed separately for minimum magnitudes of 7.0, 7.5, and 8.0. For the catalogs with aftershocks removed (i.e., Fig. 1), the p values range from 7.5% to 94%. If we remove all smaller events within 3 yr and 1,000 km, regardless of whether they are foreshocks or aftershocks, the p values range from 16.7% to 95.1%. For all three magnitude thresholds and all three tests, the observed distribution of declustered event times is consistent with the hypothesis that declustered event times follow a homogeneous Poisson process. These results agree with those of Michael (19), who concluded that the global M≥7.5 and M≥8 catalogs (1900–2011), after aftershock removal, are well described by a Poisson process. Our smallest estimated p values, after declustering, occur for the M≥7 catalog, which has the largest number of events and is thus most sensitive to small rate changes, but even these values are not less than 5%.
Table 1. Estimated p values for the hypothesis that times of events in the original and regionally declustered catalogs are independent, identically distributed uniform random variables, for several hypothesis tests.
One might ask what it would take to reject the hypothesis that regionally declustered seismicity follows a homogeneous Poisson process. Suppose the declustered catalog were extended by a year. How many more globally dispersed events of a given magnitude would have to occur in that year for the tests to reject the null hypothesis? For M≥8.5 events, the hypothesis would be rejected by the Poisson dispersion and multinomial chi-square tests if three more such events were to occur in the year following the end of the catalog, increasing the total from 16 to 19 events. The p values for the KS, Poisson dispersion (PD), and multinomial chi-square (MC) tests then would be about 16.2%, 0.9%, and 1.6%, respectively. For M≥8.0 events, the hypothesis would be rejected by the multinomial chi-square test if seven more such events were to occur in the year; the p values for the KS, PD, and MC tests then would be 6.0%, 5.3%, and 2.1%, respectively.
As mentioned above, it is essentially always possible in retrospect to find some feature of a dataset that would be prospectively unlikely under the null hypothesis, so it is essentially always possible—after looking at the data—to find or contrive some test that formally rejects the null hypothesis (in this case, for instance, a test on the basis of the longest gap between M≥8.5 events). But, as the simulations in Monte Carlo Tests show, the formal significance level or p value would not be meaningful, because it does not take into account the “data snooping” involved in selecting the test.
Sensitivity to Declustering Parameters
We have shown that the global catalog of large earthquakes, with aftershocks (or both foreshocks and aftershocks) removed as described above, is statistically indistinguishable from a homogeneous Poisson process. However, it is well known that earthquakes cluster in local and regional catalogs: There are swarms, foreshocks, and aftershocks. Thus, we should expect the p value of the Poisson hypothesis to be lower for the original catalogs than for the declustered catalogs, as Table 1 confirms: The p values are generally smaller for the original catalogs than for the declustered catalogs (except for the multinomial chi-square test, which, as mentioned, measures something other than clustering). The original (undeclustered) catalog for M≥7.0 earthquakes is clearly inconsistent with the Poisson hypothesis. But the p values are not small for the original M≥7.5 and M≥8.0 catalogs. Even without declustering, the null hypothesis that times of large earthquakes follow a homogeneous Poisson process would not be rejected by any of these tests.
Because the original catalog for M 7.5 and larger events is consistent with the Poisson hypothesis, our conclusions for large earthquakes clearly do not depend strongly on details of the declustering method. For example, milder cutoffs of 400 d and 333 km give KS p values of 56.5% and 35.9% for the M≥8 and M≥7.5 declustered catalogs (aftershocks removed), respectively. The estimated p values depend on the choice of declustering parameters, but our main conclusions do not depend upon these details.
Discussion
Global clustering of large earthquakes is not statistically significant: The data are statistically consistent with the hypothesis that these events arise from a homogeneous Poisson process. However, it is possible that rate changes are at least partially responsible for the surplus of large earthquakes during 1950–1965 and 2004–2011 and for the intervening gap in activity. The long-term average rate of large earthquakes is uncertain. McCaffrey (20) argues on the basis of global subduction zone properties that the expected rate of M 9 earthquakes may be only 1–3 per century, implying that the five M 9 earthquakes observed since 1900 exceed the expected number. Given the low rate of large earthquakes, there will not be enough data to place tight constraints on the long-term average rate and possible rate changes for many years.
The stability of earthquake magnitude estimates is also critical. Individual magnitudes are typically uncertain to about 0.1 units. For a Gutenberg-Richter b value of one, a systematic increase in magnitude of 0.1 would increase the apparent rate of earthquakes by 25%. For our estimated b value of 1.3 for large earthquakes in the declustered catalog, the increase in apparent rate is about 35%. It is not our purpose here to revisit the discussion of possible systematic changes in catalog magnitude assignments (e.g., refs. 13, 14, and 21) but simply to note that magnitude estimates matter for evaluating possible rate changes. For example, Engdahl and Villaseñor (21) wrote, “Moreover, it was impossible to match the seismicity rates of the historical period to those of the modern period without making a reduction in the older magnitudes by about 0.2 units. However, final resolution of this problem is presently beyond the scope of this study so that, for example, the apparent higher seismicity rate during the 1940–1960 period (Fig. 3b) will remain problematic.” The uncertainty in catalog magnitude stability introduces hard-to-quantify errors in assessing possible long-term rate changes.
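The size of this effect follows directly from the Gutenberg-Richter relation, $\log_{10} N(\ge M) = a - bM$: a uniform upward shift of $\Delta M$ in assigned magnitudes changes the apparent rate of events above a fixed cutoff $M_c$ by the factor

$$\frac{N(\ge M_c - \Delta M)}{N(\ge M_c)} = 10^{\,b\,\Delta M},$$

which for $b = 1.3$ and $\Delta M = 0.1$ gives $10^{0.13} \approx 1.35$, the roughly 35% increase quoted above.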
Because empirical earthquake data do not settle the question definitively, the case for global rate changes depends on the plausibility of physical mechanisms that might cause such changes. One possibility is that large earthquakes trigger other large earthquakes, like the triggering observed in local and regional aftershock sequences. Although static stress changes resulting from fault rupture decay rapidly with distance, dynamic triggering is often observed at much larger distances than the traditional aftershock zone (e.g., ref. 22). Surface waves generated by large earthquakes have been observed to trigger small (M < 5) earthquakes at global distances (e.g., refs. 23 and 24). However, Parsons and Velasco (25) find that the past 30 yr of seismicity shows no short-term (< 1 d) surface-wave triggering of larger (M > 5) earthquakes at distances beyond 600–1,000 km. They conclude that the regional risk of larger earthquakes is increased following a main shock but that the global risk is not.
Thus, although earthquake-to-earthquake triggering has been hypothesized to cause apparent global clustering of large earthquakes (5), such triggering of large earthquakes, if it exists, would need to behave differently from ordinary aftershock sequences, which typically follow both a Gutenberg-Richter distribution of event sizes and Omori’s law for decay of frequency with time. As discussed above, during periods with an above-average rate of large earthquakes, there has been no corresponding increase in the rate of smaller earthquakes. As just noted, Parsons and Velasco (25) show that there is no peak in large earthquake activity just after the surface waves from large earthquakes pass. And the above-average rates of large earthquake activity from 1950–1965 and 2004–2011 did not start with the flurry of activity typical in aftershock sequences. The largest and second-largest recorded earthquakes since 1900, the M 9.6 1960 Chile and the M 9.2 1964 Alaska events, were near the end of a period of enhanced activity, not near its beginning, and were followed by a notable gap in large earthquakes.
These observations imply that global earthquake clustering, if it has a physical explanation, is more analogous to seismic swarms than to main shock–aftershock sequences. Swarms are spatially compact clusters of events that occur for a limited time and typically do not begin with their largest event. They are difficult to explain with standard earthquake-to-earthquake triggering models and are often ascribed to slow slip or fluid flow near the swarm site (e.g., refs. 26 and 27); i.e., there is an underlying physical driving mechanism. But no physical mechanism has been proposed to explain possible global seismicity swarms that is physically plausible and that would not be detected in other observations. Although global cycles of earthquake energy release have been hypothesized (2, 28), there is as yet no evidence to support these ideas, other than apparent changes in seismicity rate, which, as we have shown here and Michael (19) has shown, are not statistically significant.
Another hypothesis is that stress diffused through postseismic relaxation of the asthenosphere triggers events on a global scale. Pollitz et al. (29) suggested that four great subduction zone earthquakes from 1952 to 1964 in the Kurils-Aleutians-Alaska arc caused a stress pulse that reached California in 1985 and may have increased seismicity rates there. However, the predicted stress changes at large distances are small compared to those usually thought to trigger earthquakes (e.g., ref. 30). An additional difficulty with this explanation for global clustering of large earthquakes is that a stress pulse would travel too slowly to cause the observed global grouping of large earthquakes. The 2010 M 8.8 Chile earthquake is too far away from the 2004 M 9 Sumatra earthquake for stress diffusion to be a factor.
Our conclusion that the global threat of large earthquakes has not recently increased is based both on the lack of statistical evidence that regionally declustered seismicity is temporally heterogeneous on a global scale and on the implausibility of physical mechanisms proposed to explain global clustering. The estimated global rate of very large (M > 9) earthquakes is still very uncertain because only five such events have occurred since 1900. The recent elevated rate of large earthquakes has increased estimates of large earthquake danger: The empirical rate of such events is higher than before. However, there is no evidence that the rate of the underlying process has changed. In other words, there is no evidence that the risk has changed, but our estimates of the risk have changed.
Although there is little evidence that the global threat of earthquake occurrence has changed in areas far from recent activity, the current threat of large earthquakes is certainly above its long-term average in regions like Sumatra, Chile, and Japan, which have recently experienced large earthquakes. Finally, of course, even if the danger has not increased recently, that does not mean that the ongoing danger is small or should be ignored.
Acknowledgments
We thank Thomas Jordan and Andrew Michael for stimulating and constructive reviews. This research was supported by the US Geological Survey National Earthquake Hazards Reduction Program and by the Southern California Earthquake Center.
Footnotes
- ¹To whom correspondence should be addressed. E-mail: pshearer@ucsd.edu.
Author contributions: P.M.S. and P.B.S. designed research, performed research, analyzed data, and wrote the paper.
The authors declare no conflict of interest.
See Commentary on page 651.
*The threshold 5 was chosen because many textbooks state that the chi-square approximation to the null distribution of the chi-square statistic is adequate when the expected number of counts in every bin is at least 5. However, this method of selecting the categories for the chi-square test makes the number and nature of the categories depend on the observed data through the empirical rate of seismicity. Conditioning on the data in this way alters the null distribution of the test statistic, which we take into account by using simulation instead of relying on the chi-square distribution to find the p value.
Freely available online through the PNAS open access option.
References
1.
2. Bufe CG, Perkins DM
3. Bufe CG, Perkins DM
4.
5. Brodsky EE
6.
7. Gardner JK, Knopoff L
8.
9. Kagan YY, Jackson DD
10. Wyss M, Toya Y
11. Lombardi AM, Marzocchi W
12.
13. Haberman RE
14.
15. Bar-Hillel M, Wagenaar WA
16. Frohlich C, Davis SD
17. Brown LD, Zhao LH
18. Lehmann EL
19.
20. McCaffrey R
21.
22. Hill DP, et al.
23.
24.
25.
26. Hainzl S
27. Vidale JE, Shearer PM
28. Romanowicz B
29. Pollitz FF, Bürgmann R, Romanowicz B
30. Kerr RA