Earthquake shaking, Geothermal exploration
BSc(Hons) VUW (1970); PhD VUW (1977)
Publications from 1990 to the present
Recent research projects
Understanding the Omori aftershock law (with Annemarie Christophersen)
We have extracted a set of aftershocks from the SCEDC earthquake catalogue for Southern California with a cut-off magnitude of 2.0. We fitted the modified Omori law to subsets of aftershock times during the first day after the mainshock, for mainshocks and aftershocks between magnitudes 2.0 and 4.0, in bands of 0.5 magnitude units. For these subsets there is a low probability of missing aftershocks. We find that a single small value of c = 2.7 minutes fits all bands, i.e. c is independent of mainshock magnitude. When the fitting interval is shortened to 0.2 days following a mainshock, the c value falls to 1.4 minutes. We conclude that the value of c that reflects the initiation of the aftershock process is very small, probably less than one minute for all mainshocks considered here. We infer that the larger c values typically determined in aftershock studies and seismicity modeling reflect missing aftershocks immediately after the mainshock. This means that c is a nuisance parameter in seismicity modeling, since the observed value does not relate to any physical process. We propose a different formulation of the Omori law.
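A maximum-likelihood fit of the modified Omori rate K/(t + c)^p over a fixed interval can be sketched as follows. The synthetic event times, starting values, and parameter choices are illustrative only, not the SCEDC data or the published fitting procedure:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, times, T):
    """Negative log-likelihood of the modified Omori law
    rate(t) = K / (t + c)^p for aftershock times in [0, T] (days)."""
    K, c, p = params
    if K <= 0 or c <= 0 or p <= 0:
        return np.inf
    rate = K / (times + c) ** p
    # Expected number of events in [0, T] (integral of the rate)
    if abs(p - 1.0) < 1e-9:
        expected = K * (np.log(T + c) - np.log(c))
    else:
        expected = K * ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    return expected - np.sum(np.log(rate))

# Synthetic sequence with p = 1 and c = 2.7 minutes (values for illustration)
rng = np.random.default_rng(0)
true_K, true_c = 50.0, 2.7 / 1440.0          # c in days
n_events = rng.poisson(true_K * np.log(1 + 1.0 / true_c))
# Inverse-CDF sampling: N(t) = K log(1 + t/c) for p = 1
u = rng.uniform(0, 1, n_events)
times = np.sort(true_c * ((1 + 1.0 / true_c) ** u - 1))

res = minimize(neg_log_likelihood, x0=[30.0, 0.01, 1.1],
               args=(times, 1.0), method="Nelder-Mead")
K_hat, c_hat, p_hat = res.x
print(f"K = {K_hat:.1f}, c = {c_hat * 1440:.1f} min, p = {p_hat:.2f}")
```

Restricting the fit to subsets with few missing events, as in the study, is what makes the recovered c interpretable rather than an artefact of early-time incompleteness.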
A new model for earthquake recurrence (with Annemarie Christophersen)
A global seismicity catalogue is used to form a pooled database of 341 time intervals between successive earthquakes in 79 superclusters of earthquakes worldwide that were potentially causally related. The empirical distribution function of the inter-event times motivates the development of a new probability function for earthquake recurrence times. This is a mixture of two terms. The first term is an Omori aftershock decay law of the form 1/t, modified with modulating exponential functions to remove the singularities at 0 and ∞. After experiments with exponentially modulated power laws, a best maximum likelihood fit for the second term was found with a simple exponential (Poisson) model. The resulting model has only two effective parameters: the proportion of events that are causally related and the Poisson time constant. The model is illustrated by applying it to a single supercluster of M ≥ 7 earthquakes that have occurred in central New Zealand since 1840. Comparison with the global model suggests that a small number of large aftershocks may be missing from the New Zealand catalogue. As a general result, if more than about 100 days pass following a large earthquake without a second one as large, the probability of another is approximately the same as given by a Poisson model. As the elapsed time increases the ratio of annual probability to Poisson probability falls, possibly reaching less than 80% of Poisson in the New Zealand supercluster if the conclusion about missing aftershocks is correct.
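A minimal numerical sketch of such a two-component mixture is given below. The taper form, mixing proportion, Poisson time constant, and taper constants are all assumptions for illustration, not the fitted values; the point is to show how the hazard of the mixture approaches, then can dip below, the pure Poisson hazard as elapsed time grows:

```python
import numpy as np
from scipy.integrate import quad

def omori_term(t, t_short=0.01, t_long=1000.0):
    """A 1/t decay tapered by exponentials at both ends to keep the
    density integrable near 0 and infinity (form assumed for illustration)."""
    return np.exp(-t_short / t) * np.exp(-t / t_long) / t

# Normalise the tapered power law numerically
norm, _ = quad(omori_term, 0, np.inf)

def mixture_density(t, w=0.6, tau=500.0):
    """w: proportion of causally related intervals; tau: Poisson time
    constant in days (both values hypothetical, not the fitted ones)."""
    return w * omori_term(t) / norm + (1 - w) * np.exp(-t / tau) / tau

def hazard_ratio(t, w=0.6, tau=500.0):
    """Ratio of the mixture hazard rate at elapsed time t (days)
    to the constant Poisson hazard 1/tau."""
    surv, _ = quad(mixture_density, t, np.inf, args=(w, tau))
    return mixture_density(t, w, tau) / surv * tau

for t in (1, 10, 100, 1000):
    print(f"t = {t:5d} days: hazard / Poisson = {hazard_ratio(t):.2f}")
```

With these illustrative constants the hazard is strongly elevated in the first days after an event and decays toward the Poisson level over a few hundred days, qualitatively matching the ~100-day result quoted above.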
Modelling changes in the level of Lake Taupo, New Zealand (with Tim Williams, Des Darby)
Principal Component Analysis was applied to a set of relative water level measurements made at 22 sites around Lake Taupo, New Zealand, in 37 surveys during 1986-96. Only a single mode was significantly above noise levels. This mode showed a subsidence of the Lake shore that decayed exponentially with time, with a time constant of about 12 years. The mode was well modeled by a Mogi point dilatation at a depth of 8 ± 1 km located beneath the point with the greatest rate of subsidence, which was about 8.5 ± 1 mm/yr, in the centre of the northern Lake shore. The model implies that the source contracted by 0.02 ± 0.002 km³ during 1986-96. Three model parameters, the time constant, the depth and the volume contraction, place constraints on the physical process or processes that caused the contraction and resulting Lake shore subsidence. We infer that magma was intruded into the crust beneath the north shore of the Lake at some time prior to the start of the Lake shore observations in 1979 and that the contraction was due to water leaving the melt because of decompression. For this to work the water must diffuse quickly away from the source, and we accordingly infer that the permeability of the overlying crust must be 10⁻¹⁵ m² or greater. To match the observed source contraction, and assuming a 1% by weight fluid loss, the magma would have to have contained about 2.5 km³ or more of melt.
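The quoted depth and volume change can be checked against the standard Mogi point-source formula for vertical surface displacement. The sketch below is a consistency check only; spreading the total displacement uniformly over the decade is a simplification, since the observed decay was exponential:

```python
import numpy as np

def mogi_uz(r, depth_km, dV_km3, nu=0.25):
    """Vertical surface displacement (km) above a Mogi point source:
    uz = (1 - nu)/pi * dV * d / (d^2 + r^2)^(3/2).
    r: horizontal distance from the source axis (km); dV negative
    for contraction (gives subsidence, uz < 0)."""
    return (1 - nu) / np.pi * dV_km3 * depth_km / (depth_km**2 + r**2) ** 1.5

# Values from the study: depth ~8 km, contraction ~0.02 km^3 over 1986-96
depth, dV = 8.0, -0.02
subsidence_mm = -mogi_uz(0.0, depth, dV) * 1e6   # peak subsidence, mm
print(f"peak subsidence ~{subsidence_mm:.0f} mm, "
      f"~{subsidence_mm / 10:.1f} mm/yr averaged over 1986-96")
```

The computed peak of roughly 75 mm over the decade, about 7.5 mm/yr, is of the same order as the observed 8.5 ± 1 mm/yr maximum subsidence rate.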
Kernel Estimation for Earthquake Occurrence Modelling (with Christian Stock)
The method of kernel estimation has been used to develop spatially continuous seismicity models (earthquake probability distributions) from a given earthquake catalogue. Our approach is adaptive kernel estimation, which uses a spatially variable bandwidth parameter. Comparison of its performance with kernel estimates that use spatially invariant bandwidths suggests that (discrete) earthquake distributions require different degrees of local smoothing to provide useful spatial seismicity models. Using adaptive kernel estimation, the (local) indices of temporal dispersion of any earthquake probability distribution can be estimated and used to model the spatial probability distribution of mainshocks. Application of these methods to New Zealand and Australian earthquake catalogues shows that the spatial features (earthquake clusters) in which the mainshocks occurred have been reasonably stable throughout the observation period.
Model for Aftershock occurrence (with Annemarie Christophersen)
A homogeneous, global earthquake catalogue has been used to build a database of aftershock sequences. The catalogue has been searched for related events by a simple window in space and time. Spatial analysis of this database of aftershock sequences has led to the definition of the aftershock area A (km²) as a function of mainshock magnitude M: log₁₀ A = M − (3.34 ± 0.03).
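For illustration, the relation gives the expected aftershock area directly from magnitude:

```python
def aftershock_area_km2(M):
    """Aftershock area (km^2) from mainshock magnitude M, using the
    central value of the fitted relation log10 A = M - 3.34."""
    return 10 ** (M - 3.34)

print(f"M 7.0 mainshock: ~{aftershock_area_km2(7.0):.0f} km^2")
```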
The area includes approximately 90% of events that are close in space and time to the mainshock. It may include background seismicity. The number of aftershocks within the model area, the abundance, has been analysed as a function of mainshock magnitude. We have found that the number of aftershocks following a mainshock of magnitude M follows a geometric distribution, i.e. the probability P(n) of having n aftershocks of magnitude greater than some threshold magnitude M₀ is given by: P(n) = (1 − q) qⁿ, (0 < q < 1)
where q depends in a simple way on M. Using this we have derived simple expressions for the probability of an aftershock of any magnitude during an aftershock sequence, which can be used to provide insurers with a quantitative assessment of aftershock hazard and risk. We also calculate the conditional probability of large aftershocks, given the number of aftershocks observed in some initial period of time (e.g. 1 day). This information can be used by emergency managers to assess the hazard from large aftershocks during the response and early recovery stages following a mainshock.
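The geometric model translates directly into simple probability calculations. The value of q below is hypothetical; in the study q depends on the mainshock magnitude M:

```python
def prob_n_aftershocks(n, q):
    """P(n) = (1 - q) q^n: geometric probability of exactly n
    aftershocks above the threshold magnitude."""
    return (1 - q) * q ** n

def prob_at_least_one(q):
    """P(n >= 1) = 1 - P(0), which for the geometric model equals q."""
    return 1 - prob_n_aftershocks(0, q)

q = 0.8                      # hypothetical value for illustration
mean_n = q / (1 - q)         # mean of the geometric distribution
print(f"P(no aftershocks)  = {prob_n_aftershocks(0, q):.2f}")
print(f"P(at least one)    = {prob_at_least_one(q):.2f}")
print(f"expected count     = {mean_n:.1f}")
```

Conditioning works the same way: counting aftershocks in the first day updates the estimate of q, and hence the probability of a large aftershock later in the sequence.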