ORCID Profile
0000-0001-6780-1679
Current Organisations
Medical University of South Carolina - College of Medicine
Durham University
In Research Link Australia (RLA), "Research Topics" refer to ANZSRC FOR and SEO codes. These topics are either sourced from ANZSRC FOR and SEO codes listed in researchers' related grants or generated by a large language model (LLM) based on their publications.
Seismology and Seismic Exploration | Geophysics | Natural Hazards | Pattern Recognition and Data Mining | Geodynamics | Electrical and Electromagnetic Methods in Geophysics
Expanding Knowledge in the Earth Sciences | Oil and Gas Exploration | Natural Hazards in Marine Environments | Mineral Exploration not elsewhere classified | Natural Hazards in Urban and Industrial Environments
Publisher: Oxford University Press (OUP)
Date: 21-03-2012
Publisher: Oxford University Press (OUP)
Date: 04-01-2023
DOI: 10.1093/GJI/GGAD002
Abstract: Concerns raised by Okazaki & Ueda (2022) on the paper by Sambridge et al. (2022) are addressed. Two issues are discussed and some new numerical results presented. The first concerns whether the properties of the Wasserstein time-series misfit introduced in our earlier paper will translate to model-space non-uniqueness in a seismic waveform inversion setting. It is argued that this is unlikely, given the special conditions that must exist between all observed–predicted seismic waveform pairs for non-uniqueness to result. The second issue discussed is the efficacy of using the Sliced Wasserstein algorithm of Bonneel et al. (2015) as an alternative to the marginal Wasserstein algorithm, as proposed by Okazaki & Ueda (2022). It is argued that for optimization-based waveform fitting, the Sliced Wasserstein algorithm is a viable alternative provided care is taken to ensure that conditions do not arise which invalidate analytical derivative expressions of the resulting Wasserstein misfit. In practice, this would likely mean recasting the 2D Optimal Transport problem posed in our earlier paper onto unstructured grids.
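The Sliced Wasserstein idea discussed above — reducing a multi-dimensional Optimal Transport comparison to many one-dimensional ones along random projection directions — can be sketched in a few lines. This is a generic NumPy illustration for equal-size 2-D point clouds, not the implementation of Bonneel et al. (2015) or the formulation in either paper:

```python
import numpy as np

def wasserstein_1d(u, v):
    """Closed-form W1 distance between two equal-size 1-D samples (sort and compare)."""
    return np.mean(np.abs(np.sort(u) - np.sort(v)))

def sliced_wasserstein(X, Y, n_proj=100, seed=0):
    """Monte Carlo sliced-Wasserstein distance between two 2-D point clouds:
    average the 1-D distance over random projection directions."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.uniform(0.0, np.pi)            # random slice direction
        d = np.array([np.cos(theta), np.sin(theta)])
        total += wasserstein_1d(X @ d, Y @ d)      # 1-D optimal transport on the slice
    return total / n_proj
```

Each slice is a cheap one-dimensional transport problem with a sort-based closed form, which is why the sliced variant scales well compared with full 2D Optimal Transport.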
Publisher: Elsevier BV
Date: 10-2013
Publisher: Oxford University Press (OUP)
Date: 22-04-2022
DOI: 10.1093/GJI/GGAC151
Abstract: We propose a new approach to measuring the agreement between two oscillatory time-series, such as seismic waveforms, and demonstrate that it can be used effectively in inverse problems. Our approach is based on Optimal Transport theory and the Wasserstein distance, with a novel transformation of the time-series to ensure that necessary normalization and positivity conditions are met. Our measure is differentiable, and can readily be used within an optimization framework. We demonstrate performance with a variety of synthetic examples, including seismic source inversion, and observe substantially better convergence properties than achieved with conventional L2 misfits. We also briefly discuss the relationship between Optimal Transport and Bayesian inference.
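A minimal illustration of the general idea: map each oscillatory series to a positive, unit-mass 'density', then compare the two densities with a 1-D Wasserstein distance computed from their CDFs. The shift-and-normalise transform below is purely illustrative — the paper's actual transformation differs:

```python
import numpy as np

def to_density(w, eps=1e-3):
    """Map an oscillatory series to a positive, unit-mass density.
    (Illustrative shift-and-normalise only; the paper uses a different transform.)"""
    p = w - w.min() + eps
    return p / p.sum()

def w1_misfit(p, q, t):
    """1-D Wasserstein (W1) distance between densities on a common grid t,
    computed from the absolute difference of their cumulative distributions."""
    dt = np.diff(t, prepend=t[0])
    return np.sum(np.abs(np.cumsum(p) - np.cumsum(q)) * dt)
```

Unlike an L2 misfit, this distance grows smoothly with the time shift between otherwise identical waveforms, which is the convergence advantage the abstract describes.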
Publisher: Elsevier BV
Date: 12-2014
Publisher: Copernicus GmbH
Date: 15-05-2023
DOI: 10.5194/EGUSPHERE-EGU23-12173
Abstract: Many Earth systems cannot be observed directly, or in isolation. Instead, we must infer their properties and characteristics from their signature in one or more datasets, using a variety of techniques (including those based on optimization, statistical methods, or machine learning). Development of these techniques is an area of focus for many geoscience researchers, and methodological advances can be instrumental in enhancing our understanding of the Earth. In our experience, progress is substantially hindered by the absence of infrastructure facilitating communication between sub-disciplines. Researchers tend to focus on one area of the earth sciences, such as seismology, hydrology or oceanography, with only slow percolation of ideas and innovations from one area to another. Indeed, silos often exist even within these subfields. Testing new ideas on new problems is challenging as it requires the acquisition of domain knowledge, an often difficult and time-consuming endeavour with uncertain returns. Key questions that arise include: What is a relevant field data set, and how has it been processed? Which simulation package is most appropriate to predict the data? What would a ‘good’ model look like and what should it be able to resolve? What is the current best practice? To address this, we introduce the ESPRESSO project, a collection of Earth Science Problems for the Evaluation of Strategies, Solvers and Optimisers. It aims to provide access to a suite of ‘test problems’, spanning a wide range of inference and inversion scenarios. Each test problem defines appropriate dataset(s) and simulation routines, accessible within a standardised Python interface. This will allow researchers to rapidly test new techniques across a spectrum of problems, share domain-specific inference problems and ultimately identify areas where there may be potential for fruitful collaboration and development.
ESPRESSO is envisaged as an open, community-sourced project, and we invite contributions from across the geosciences.
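A hypothetical sketch of what a standardised problem interface of this kind might look like — the class and method names here are illustrative inventions, not ESPRESSO's actual API:

```python
import numpy as np

class TestProblem:
    """Hypothetical standardised interface (names illustrative, not ESPRESSO's real API)."""
    def good_model(self): ...
    def forward(self, model): ...
    def data(self): ...

class LinearRegressionProblem(TestProblem):
    """Toy contributed problem: fit a straight line y = m[0] + m[1] * x."""
    def __init__(self):
        self.x = np.linspace(0.0, 1.0, 20)
        self._true = np.array([1.0, 2.0])
    def good_model(self):
        return self._true.copy()
    def forward(self, model):
        return model[0] + model[1] * self.x
    def data(self):
        return self.forward(self._true)

def linear_solver(problem, ndim):
    """A solver written purely against the interface: for a linear forward
    operator, assemble G column-by-column from basis-vector models."""
    G = np.column_stack([problem.forward(np.eye(ndim)[i]) for i in range(ndim)])
    m, *_ = np.linalg.lstsq(G, problem.data(), rcond=None)
    return m
```

The point of such an interface is that any solver written against it can be run unchanged across every contributed test problem.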
Publisher: Oxford University Press (OUP)
Date: 02-2010
Publisher: MDPI AG
Date: 15-07-2019
DOI: 10.3390/W11071463
Abstract: Conceptual uncertainty is considered one of the major sources of uncertainty in groundwater flow modelling. In this regard, hypothesis testing is essential to increase system understanding by refuting alternative conceptual models. Often a stepwise approach, with respect to complexity, is promoted but hypothesis testing of simple groundwater models is rarely applied. We present an approach to model-based Bayesian hypothesis testing in a simple groundwater balance model, which involves optimization of a model as a function of both parameter values and conceptual model through trans-dimensional sampling. We apply the methodology to the Wildman River area, Northern Territory, Australia, where we set up 32 different conceptual models. A factorial approach to conceptual model development allows for direct attribution of differences in performance to individual uncertain components of the conceptual model. The method provides a screening tool for prioritizing research efforts while also giving more confidence to the predicted water balance compared to a deterministic water balance solution. We show that the testing of alternative conceptual models can be done efficiently with a simple additive and linear groundwater balance model and is best done relatively early in the groundwater modelling workflow.
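A toy, one-parameter version of model-based Bayesian hypothesis testing: score each conceptual model by its marginal likelihood (evidence), integrating a Gaussian likelihood over a prior grid. This sketches the general principle only — it is not the trans-dimensional sampler used in the paper:

```python
import numpy as np

def log_evidence(data, predict, prior_grid, sigma=1.0):
    """Log marginal likelihood of a conceptual model, integrating a Gaussian
    likelihood over a uniform prior grid for one uncertain parameter."""
    logls = np.array([-0.5 * np.sum((data - predict(theta)) ** 2) / sigma**2
                      for theta in prior_grid])
    m = logls.max()
    return m + np.log(np.mean(np.exp(logls - m)))   # log-sum-exp for stability
```

Comparing evidences between two candidate conceptual models (e.g. a linear trend versus a constant) then refutes the model that explains the data worse after accounting for its parameter uncertainty.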
Publisher: American Geophysical Union (AGU)
Date: 16-01-2013
DOI: 10.1029/2012GL054209
Publisher: American Geophysical Union (AGU)
Date: 20-06-2013
DOI: 10.1002/GRL.50615
Publisher: Cambridge University Press
Date: 30-06-2023
Publisher: American Geophysical Union (AGU)
Date: 08-2020
DOI: 10.1029/2020GC009240
Publisher: The Open Journal
Date: 26-08-2022
DOI: 10.21105/JOSS.04217
Publisher: Oxford University Press (OUP)
Date: 19-11-2019
DOI: 10.1093/GJI/GGZ520
Abstract: We develop a theoretical framework for framing and solving probabilistic linear(ized) inverse problems in function spaces. This is built on the statistical theory of Gaussian Processes, and allows results to be obtained independent of any basis, avoiding any difficulties associated with the fidelity of representation that can be achieved. We show that the results of Backus–Gilbert theory can be fully understood within our framework, although there is not an exact equivalence due to fundamental differences of philosophy between the two approaches. Nevertheless, our work can be seen to unify several strands of linear inverse theory, and connects it to a large body of work in machine learning. We illustrate the application of our theory using a simple example, involving determination of Earth’s radial density structure.
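The machinery underlying such basis-independent results is standard Gaussian Process conditioning. A minimal sketch of the posterior mean and covariance under an assumed squared-exponential covariance (illustrative notation, not the paper's):

```python
import numpy as np

def rbf(A, B, ell=0.3):
    """Squared-exponential (RBF) covariance between 1-D input sets."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(X, y, Xs, kernel, noise=1e-2):
    """Posterior mean and covariance of a Gaussian Process conditioned on (X, y)."""
    K = kernel(X, X) + noise * np.eye(len(X))
    Ks = kernel(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    cov = kernel(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov
```

Because the prediction points Xs are arbitrary, the posterior is defined everywhere in the function space — no discretisation basis needs to be chosen, which is the property the abstract emphasises.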
Publisher: Elsevier BV
Date: 08-2016
Publisher: Oxford University Press (OUP)
Date: 19-11-2019
DOI: 10.1093/GJI/GGZ521
Abstract: By starting from a general framework for probabilistic continuous inversion (developed in Part I) and introducing discrete basis functions, we obtain the well-known algorithms for probabilistic least-squares inversion set out by Tarantola & Valette (1982). In doing so, we establish a direct equivalence between the spatial covariance function that must be specified in continuous inversion, and the combination of basis functions and prior covariance matrix that must be chosen for discretised inversion. We show that the common choice of Tikhonov regularisation ($\mathbf{C_m^{-1}} = \sigma^2\mathbf{I}$) arises from a delta-function spatial covariance, and that this lies behind many of the artefacts commonly associated with discretised inversion. We show that other choices of spatial covariance function can be used to generate regularisation matrices yielding substantially better results, and permitting localisation of features even if global basis functions are employed. We are also able to offer a straightforward explanation for the spectral leakage problem identified by Trampert & Snieder (1996).
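The equivalence described here can be demonstrated directly: build the prior covariance matrix from a spatial covariance function and compare the delta-function (Tikhonov) choice against a smooth one. A toy NumPy sketch — the operator, grid, and length-scale are all illustrative, not from the paper:

```python
import numpy as np

def least_squares_with_prior(G, d, Cm, noise=0.01):
    """Probabilistic least-squares: m = Cm G^T (G Cm G^T + Cd)^-1 d, zero prior mean."""
    Cd = noise * np.eye(len(d))
    return Cm @ G.T @ np.linalg.solve(G @ Cm @ G.T + Cd, d)

# A toy 1-D model grid, observed at five points only
x = np.linspace(0.0, 1.0, 50)
obs = [5, 15, 25, 35, 45]
G = np.zeros((5, 50))
G[np.arange(5), obs] = 1.0
d = np.sin(2 * np.pi * x)[obs]

Cm_delta = np.eye(50)                        # delta-function spatial covariance (Tikhonov)
dx = x[:, None] - x[None, :]
Cm_smooth = np.exp(-0.5 * (dx / 0.15) ** 2)  # squared-exponential spatial covariance

m_delta = least_squares_with_prior(G, d, Cm_delta)
m_smooth = least_squares_with_prior(G, d, Cm_smooth)
```

With the identity prior the solution is non-zero only where the model is directly sampled, while the smooth spatial covariance interpolates between observations — the behaviour the abstract attributes to the choice of regularisation matrix.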
Publisher: American Geophysical Union (AGU)
Date: 28-08-2016
DOI: 10.1002/2016GL069887
Publisher: Elsevier BV
Date: 11-2012
Publisher: Oxford University Press (OUP)
Date: 2023
Abstract: Regularized least-squares tomography offers a straightforward and efficient imaging method and has seen extensive application across various fields. However, it has a few drawbacks, such as (i) the regularization imposed during the inversion tends to give a smooth solution, which will fail to reconstruct a multi-scale model well or detect sharp discontinuities, (ii) it requires finding optimum control parameters, and (iii) it does not produce a sparse solution. This paper introduces ‘overcomplete tomography’, a novel imaging framework that allows high-resolution recovery with relatively few data points. We express our image in terms of an overcomplete basis, allowing the representation of a wide range of features and characteristics. Following the insight of ‘compressive sensing’, we regularize our inversion by imposing a penalty on the L1 norm of the recovered model, obtaining an image that is sparse relative to the overcomplete basis. We demonstrate our method with a synthetic and a real X-ray tomography example. Our experiments indicate that we can reconstruct a multi-scale model from only a few observations. The approach may also assist interpretation, allowing images to be decomposed into (for example) ‘global’ and ‘local’ structures. The framework presented here can find application across a wide range of fields, including engineering, medical and geophysical tomography.
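A common way to solve L1-penalised inversions of this kind is iterative shrinkage-thresholding (ISTA). The sketch below is a generic implementation of that technique — not the paper's algorithm or its overcomplete basis:

```python
import numpy as np

def ista(G, d, lam=0.05, n_iter=3000):
    """Iterative shrinkage-thresholding (ISTA) for
    min_m 0.5 * ||G m - d||^2 + lam * ||m||_1."""
    step = 1.0 / np.linalg.norm(G, 2) ** 2        # 1 / Lipschitz constant of the gradient
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        g = m - step * G.T @ (G @ m - d)          # gradient step on the data term
        m = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return m
```

With far fewer observations than unknowns, the L1 penalty drives most coefficients exactly to zero, which is the 'sparse relative to the basis' behaviour exploited by compressive sensing.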
Publisher: Oxford University Press (OUP)
Date: 27-06-2013
DOI: 10.1093/GJI/GGT220
Publisher: Oxford University Press (OUP)
Date: 09-11-2015
DOI: 10.1093/GJI/GGV440
Abstract: Whenever a geophysical image is to be constructed, a variety of choices must be made. Some, such as those governing data selection and processing, or model parametrization, are somewhat arbitrary: there may be little reason to prefer one choice over another. Others, such as defining the theoretical framework within which the data are to be explained, may be more straightforward: typically, an ‘exact’ theory exists, but various approximations may need to be adopted in order to make the imaging problem computationally tractable. Differences between any two images of the same system can be explained in terms of differences between these choices. Understanding the impact of each particular decision is essential if images are to be interpreted properly—but little progress has been made towards a quantitative treatment of this effect. In this paper, we consider a general linearized inverse problem, applicable to a wide range of imaging situations. We write down an expression for the difference between two images produced using similar inversion strategies, but where different choices have been made. This provides a framework within which inversion algorithms may be analysed, and allows us to consider how image effects may arise. In this paper, we take a general view, and do not specialize our discussion to any specific imaging problem or setup (beyond the restrictions implied by the use of linearized inversion techniques). In particular, we look at the concept of ‘hybrid inversion’, in which highly accurate synthetic data (typically the result of an expensive numerical simulation) is combined with an inverse operator constructed based on theoretical approximations. It is generally supposed that this offers the benefits of using the more complete theory, without the full computational costs. We argue that the inverse operator is as important as the forward calculation in determining the accuracy of results. 
We illustrate this using a simple example, based on imaging the density structure of a vibrating string.
Publisher: Copernicus GmbH
Date: 15-05-2023
DOI: 10.5194/EGUSPHERE-EGU23-14047
Abstract: The Arabia-Eurasia collision, which started during Late Eocene (~35 Ma) or afterward across the Bitlis-Zagros suture, resulted in the formation of the Turkish–Iranian Plateau. Even though the average elevation throughout the plateau is around 2 km, the lithospheric structures between East Anatolian and the Iranian parts may be different. For instance, seismological studies suggest that East Anatolia is underlain by anomalously low-speed anomalies/hot asthenosphere whereas the Iranian part is associated with a rather thick ( km in some places) and strong lithosphere. Therefore, the area may be regarded as two distinct regions, namely, the East Anatolian Plateau and the Iranian Plateau. The growth of the plateau is mostly attributed to slab break-off combined with crustal shortening. Other processes often associated with the collision are lithospheric delamination and tectonic escape of microplates. The hypotheses suggested for the growth of the plateau have yet to fully explain the dualistic nature of the lithosphere in a region where elevations are roughly similar. In this work, by using 2D numerical experiments we aim to investigate the physical, geometric, and rheological parameters affecting the deformation of the plate during pre-, syn-, and post-collision. Our preliminary model results show an extension (up to ~70 km) on the terrane that is dragged behind the subducting plate, while the overriding plate undergoes shortening during the collision. The collision results in ~100 km of underthrusting in 50 Myrs, which is in the range of the measured amounts of underthrusting across the plateau. We aim to expand the study by creating comparative model sets (i.e., models representing East Anatolia vs. models representing Iran) with a parameterization of varying lithospheric structures (e.g., different crust and mantle thicknesses) and strength profiles, which will help us to understand the kinematics and dynamics of such orogenic growth.
Publisher: Wiley
Date: 17-06-2020
Publisher: Oxford University Press (OUP)
Date: 10-03-2022
DOI: 10.1093/GJI/GGAC100
Abstract: Monte Carlo methods are widespread in geophysics and have proved to be powerful in non-linear inverse problems. However, they are associated with significant practical challenges, including long calculation times, large output ensembles of Earth models, and difficulties in the appraisal of the results. This paper addresses some of these challenges using generative models, a family of tools that have recently attracted much attention in the machine learning literature. Generative models can, in principle, learn a probability distribution from a set of given samples and also provide a means for rapid generation of new samples which follow that approximated distribution. These two features make them well suited for application to the outputs of Monte Carlo algorithms. In particular, training a generative model on the posterior distribution of a Bayesian inference problem provides two main possibilities. First, the number of parameters in the generative model is much smaller than the number of values stored in the ensemble, leading to large compression rates. Secondly, once trained, the generative model can be used to draw any number of samples, thereby eliminating the dependence on an often large and unwieldy ensemble. These advantages pave new pathways for the use of Monte Carlo ensembles, including improved storage and communication of the results, enhanced calculation of numerical integrals, and the potential for convergence assessment of the Monte Carlo procedure. Here, these concepts are initially demonstrated using a simple synthetic example that scales into higher dimensions. They are then applied to a large ensemble of shear wave velocity models of the core–mantle boundary, recently produced in a Monte Carlo study. These examples demonstrate the effectiveness of using generative models to approximate posterior ensembles, and indicate directions to address various challenges in Monte Carlo inversion.
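The compression argument can be illustrated with the simplest conceivable generative model — a multivariate Gaussian fitted to the ensemble, storing d + d(d+1)/2 numbers instead of N × d samples. The paper's generative models are far richer; this is only a schematic of the fit-then-resample workflow:

```python
import numpy as np

class GaussianGenerator:
    """Minimal generative stand-in: fit a multivariate Gaussian to an ensemble,
    then draw any number of new samples. (Real studies use richer models.)"""
    def fit(self, ensemble):
        self.mean = ensemble.mean(axis=0)          # d parameters
        self.cov = np.cov(ensemble, rowvar=False)  # d(d+1)/2 parameters
        return self
    def sample(self, n, seed=0):
        rng = np.random.default_rng(seed)
        return rng.multivariate_normal(self.mean, self.cov, size=n)
```

Once fitted, the original ensemble can be discarded: any number of fresh samples follow the approximated distribution, which is exactly the storage-and-communication benefit described above.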
Publisher: Oxford University Press (OUP)
Date: 11-05-2010
Publisher: Wiley
Date: 23-07-2020
Publisher: Oxford University Press (OUP)
Date: 28-03-2016
DOI: 10.1093/GJI/GGW108
Publisher: Seismological Society of America (SSA)
Date: 30-06-2015
DOI: 10.1785/0120150010
Publisher: Geoscience Australia
Date: 2020
DOI: 10.11636/135130
Publisher: Copernicus GmbH
Date: 15-05-2023
DOI: 10.5194/EGUSPHERE-EGU23-7585
Abstract: Detailed sea-level budgets are now available for the 20th and 21st centuries, but separating the differing contributions of sea-level rise prior to 1900 remains difficult, in part due to additional temporal and vertical uncertainties associated with proxy records, and the spatially variable nature of driving processes. We present tide gauge and proxy reconstructions of sea level since 1700, and analyse their structure using Gaussian process modelling, which allows for continuous reconstructions with fully quantified uncertainties. This enables the timing of accelerations and the magnitude and rates of change to be determined, and in turn enables site-specific sea-level budgets to be derived. The contribution of different driving mechanisms (e.g., glacio-isostatic adjustment and sterodynamic changes) for each site is assessed, and the evolution of the barystatic contribution for the last 300 years is evaluated.
Publisher: Oxford University Press (OUP)
Date: 25-07-2018
DOI: 10.1093/GJI/GGY303
Abstract: Most linear inverse problems require regularization to ensure that robust and meaningful solutions can be found. Typically, Tikhonov-style regularization is used, whereby a preference is expressed for models that are somehow ‘small’ and/or ‘smooth’. The strength of such preferences is expressed through one or more damping parameters, which control the character of the solution, and which must be set by the user. However, identifying appropriate values is often regarded as a matter of art, guided by various heuristics. As a result, such choices have often been the source of controversy and concern. By treating these as hyperparameters within a hierarchical Bayesian framework, we are able to obtain solutions that encompass the range of permissible regularization parameters. Furthermore, we show that these solutions are often well-approximated by those obtained via standard analysis using certain regularization choices which are—in a certain sense—optimal. We obtain algorithms for determining these optimal values in various cases of common interest, and show that they generate solutions with a number of attractive properties. A reference implementation of these algorithms, written in Python, accompanies this paper.
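A sketch of the hierarchical idea in its simplest form: treat the damping parameter as a hyperparameter and choose the value maximising the marginal likelihood (evidence) of the data under a Gaussian prior and noise model. This toy grid search is illustrative only — it is not the paper's algorithms or its accompanying Python implementation:

```python
import numpy as np

def log_evidence(G, d, lam, noise_var=0.01):
    """Log marginal likelihood of the data when m ~ N(0, I/lam),
    i.e. d ~ N(0, G G^T / lam + noise_var * I)."""
    C = G @ G.T / lam + noise_var * np.eye(len(d))
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + d @ np.linalg.solve(C, d) + len(d) * np.log(2 * np.pi))

def best_damping(G, d, grid):
    """Grid search for the damping value that maximises the evidence."""
    return grid[np.argmax([log_evidence(G, d, lam) for lam in grid])]
```

Rather than a heuristic L-curve inspection, the data themselves then select a damping strength consistent with how much structure they can actually support.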
Publisher: Elsevier
Date: 2023
Publisher: Oxford University Press (OUP)
Date: 21-08-2012
Publisher: Elsevier BV
Date: 10-2021
Publisher: Oxford University Press (OUP)
Date: 27-12-2013
DOI: 10.1093/GJI/GGT473
Publisher: Oxford University Press (OUP)
Date: 10-04-2018
DOI: 10.1093/GJI/GGY141
Publisher: Springer Science and Business Media LLC
Date: 16-09-2019
Publisher: Copernicus GmbH
Date: 30-07-2018
Abstract: “Learning algorithms” are a class of computational tool designed to infer information from a data set, and then apply that information predictively. They are particularly well suited to complex pattern recognition, or to situations where a mathematical relationship needs to be modelled but where the underlying processes are not well understood, are too expensive to compute, or where signals are over-printed by other effects. If a representative set of examples of the relationship can be constructed, a learning algorithm can assimilate its behaviour, and may then serve as an efficient, approximate computational implementation thereof. A wide range of applications in geomorphometry and Earth surface dynamics may be envisaged, ranging from classification of landforms through to prediction of erosion characteristics given input forces. Here, we provide a practical overview of the various approaches that lie within this general framework, review existing uses in geomorphology and related applications, and discuss some of the factors that determine whether a learning algorithm approach is suited to any given problem.
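The 'representative set of examples' idea can be made concrete with one of the simplest learning algorithms: k-nearest-neighbour regression, which predicts new outputs by averaging the most similar stored examples. This generic sketch is not tied to any application in the review:

```python
import numpy as np

def knn_predict(x_train, y_train, x_query, k=3):
    """Toy 'learning algorithm': k-nearest-neighbour regression, predicting new
    outputs by averaging the k most similar stored examples (1-D inputs)."""
    dist = np.abs(x_train[None, :] - x_query[:, None])   # pairwise distances
    idx = np.argsort(dist, axis=1)[:, :k]                # indices of k closest examples
    return y_train[idx].mean(axis=1)
```

Once the example set is stored, predictions are cheap, which is the 'efficient, approximate computational implementation' role the abstract describes.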
Publisher: Elsevier BV
Date: 12-2021
Location: United States of America
Location: United Kingdom of Great Britain and Northern Ireland
Start Date: 2020
End Date: 2022
Funder: Australian Research Council

Start Date: 2018
End Date: 2020
Funder: Australian Research Council

Start Date: 08-2020
End Date: 12-2023
Amount: $399,000.00
Funder: Australian Research Council

Start Date: 02-2018
End Date: 01-2021
Amount: $337,300.00
Funder: Australian Research Council