Publications
2021

CES: Calibrate, Emulate, Sample. Cleary, Emmet, Garbuno-Inigo, Alfredo, Lan, Shiwei, Schneider, Tapio, and Stuart, Andrew M. Journal of Computational Physics, 2021.
Many parameter estimation problems arising in applications can be cast in the framework of Bayesian inversion. This allows not only for an estimate of the parameters, but also for the quantification of uncertainties in the estimates. Often in such problems the parameter-to-data map is very expensive to evaluate, and computing derivatives of the map, or derivative-adjoints, may not be feasible. Additionally, in many applications only noisy evaluations of the map may be available. We propose an approach to Bayesian inversion in such settings that builds on the derivative-free optimization capabilities of ensemble Kalman inversion methods. The overarching approach is to first use ensemble Kalman sampling (EKS) to calibrate the unknown parameters to fit the data; second, to use the output of the EKS to emulate the parameter-to-data map; third, to sample from an approximate Bayesian posterior distribution in which the parameter-to-data map is replaced by its emulator. This results in a principled approach to approximate Bayesian inference that requires only a small number of evaluations of the (possibly noisy approximation of the) parameter-to-data map. It does not require derivatives of this map, but instead leverages the documented power of ensemble Kalman methods. Furthermore, the EKS has the desirable property that it evolves the parameter ensemble towards the regions in which the bulk of the parameter posterior mass is located, thereby locating the ensemble well for the emulation phase of the methodology. In essence, the EKS methodology provides a cheap solution to the design problem of where to place points in parameter space to efficiently train an emulator of the parameter-to-data map for the purposes of Bayesian inversion.
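The three phases of CES can be sketched on a toy scalar inverse problem. Everything below is an illustrative assumption, not the paper's setup: the forward model, the stochastic ensemble-Kalman update used for calibration, the small input jitter, and the polynomial fit standing in for a Gaussian-process emulator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inverse problem: recover u from noisy data y = G(u) + eta.
def forward_map(u):                        # illustrative forward model
    return u**3 + u

u_true, noise_std = 0.5, 0.1
y = forward_map(u_true) + noise_std * rng.normal()

# --- Phase 1: calibrate with a stochastic ensemble-Kalman update ---
J = 50
ensemble = rng.normal(0.0, 1.0, J)         # prior ensemble, N(0, 1)
for _ in range(5):
    g = forward_map(ensemble)
    cov_ug = np.mean((ensemble - ensemble.mean()) * (g - g.mean()))
    gain = cov_ug / (np.var(g) + noise_std**2)
    perturbed_y = y + noise_std * rng.normal(size=J)
    ensemble = ensemble + gain * (perturbed_y - g)

# --- Phase 2: emulate the forward map near the calibrated ensemble ---
# Small jitter spreads the training inputs for the fit (a design choice here).
train_u = ensemble + 0.05 * rng.normal(size=J)
coeffs = np.polyfit(train_u, forward_map(train_u), deg=3)

def emulator(u):                           # cheap surrogate for forward_map
    return np.polyval(coeffs, u)

# --- Phase 3: Metropolis sampling of the emulator-based posterior ---
def log_post(u):
    return -0.5 * ((y - emulator(u)) / noise_std) ** 2 - 0.5 * u**2

u_cur, samples = ensemble.mean(), []
for _ in range(5000):
    u_prop = u_cur + 0.2 * rng.normal()
    if np.log(rng.uniform()) < log_post(u_prop) - log_post(u_cur):
        u_cur = u_prop
    samples.append(u_cur)
samples = np.array(samples)
```

Note that after phase 1 the expensive map is never called again: the Metropolis chain in phase 3 evaluates only the surrogate.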
2020

Affine Invariant Interacting Langevin Dynamics for Bayesian Inference. Garbuno-Inigo, Alfredo, Nüsken, Nikolas, and Reich, Sebastian. SIAM Journal on Applied Dynamical Systems, 2020.
We propose a computational method (with acronym ALDI) for sampling from a given target distribution based on first-order (overdamped) Langevin dynamics which satisfies the property of affine invariance. The central idea of ALDI is to run an ensemble of particles with their empirical covariance serving as a preconditioner for their underlying Langevin dynamics. ALDI does not require taking the inverse or square root of the empirical covariance matrix, which enables application to high-dimensional sampling problems. The theoretical properties of ALDI are studied in terms of nondegeneracy and ergodicity. Furthermore, we study its connections to diffusion on Riemannian manifolds and Wasserstein gradient flows. Bayesian inference serves as a main application area for ALDI. In case of a forward problem with additive Gaussian measurement errors, ALDI allows for a gradient-free approximation in the spirit of the ensemble Kalman filter. A computational comparison between gradient-free and gradient-based ALDI is provided for a PDE-constrained Bayesian inverse problem.
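A minimal Euler-Maruyama sketch of the gradient-based dynamics described above, for a 2-D Gaussian target (where the log-density gradient is available in closed form). The target, ensemble size, step size, and iteration counts are all assumptions made for illustration. The noise is built directly from ensemble deviations, so no square root or inverse of the empirical covariance is ever formed, matching the abstract's claim.

```python
import numpy as np

rng = np.random.default_rng(3)

# Target: a 2-D Gaussian with known gradient of the log-density.
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

J, d, dt = 100, 2, 0.02
X = mu + rng.normal(size=(J, d))           # initial ensemble

def aldi_step(X):
    m = X.mean(axis=0)
    D = X - m                              # centred ensemble
    C = D.T @ D / J                        # empirical covariance (preconditioner)
    grads = -(X - mu) @ Sigma_inv          # grad log-target at each particle
    drift = grads @ C + (d + 1) / J * D    # finite-ensemble correction term
    # Noise from ensemble deviations: each row has covariance 2*dt*C,
    # without ever computing a matrix square root of C.
    noise = np.sqrt(2.0 * dt / J) * rng.normal(size=(J, J)) @ D
    return X + dt * drift + noise

for _ in range(2000):                      # burn-in
    X = aldi_step(X)
pool = []
for _ in range(2000):                      # collect stationary samples
    X = aldi_step(X)
    pool.append(X.copy())
pool = np.concatenate(pool)
```

Pooling the ensemble over time should approximately recover the target's mean and covariance.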

EKS: Interacting Langevin Diffusions: Gradient Structure and Ensemble Kalman Sampler. Garbuno-Inigo, Alfredo, Hoffmann, Franca, Li, Wuchen, and Stuart, Andrew M. SIAM Journal on Applied Dynamical Systems, 2020.
Solving inverse problems without the use of derivatives or adjoints of the forward model is highly desirable in many applications arising in science and engineering. In this paper we propose a new version of such a methodology, a framework for its analysis, and numerical evidence of the practicality of the method proposed. Our starting point is an ensemble of overdamped Langevin diffusions which interact through a single preconditioner computed as the empirical ensemble covariance. We demonstrate that the nonlinear Fokker–Planck equation arising from the mean-field limit of the associated stochastic differential equation (SDE) has a novel gradient flow structure, built on the Wasserstein metric and the covariance matrix of the noisy flow. Using this structure, we investigate large-time properties of the Fokker–Planck equation, showing that its invariant measure coincides with that of a single Langevin diffusion, and demonstrating exponential convergence to the invariant measure in a number of settings. We introduce a new noisy variant of ensemble Kalman inversion (EKI) algorithms, found from the original SDE by replacing exact gradients with ensemble differences; this defines the ensemble Kalman sampler (EKS). Numerical results are presented which demonstrate its efficacy as a derivative-free approximate sampler for the Bayesian posterior arising from inverse problems.
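The gradient-free sampler described above can be sketched for a linear forward model, where the Bayesian posterior is Gaussian in closed form and the output can be checked. This is a simplified sketch: the problem data, the fixed step size (the paper uses other time-stepping choices), and the small Cholesky jitter are all assumptions. The key point is the drift, in which ensemble differences replace exact gradients and adjoints.

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear inverse problem y = A u + noise with Gaussian prior N(0, I):
# the posterior is Gaussian in closed form, so EKS output can be verified.
A = np.array([[1.0, 2.0]])
Gamma_inv = np.linalg.inv(np.array([[0.1]]))   # inverse noise covariance
Sigma0_inv = np.eye(2)                         # inverse prior covariance
u_true = np.array([-0.5, 1.0])
y = A @ u_true + np.sqrt(0.1) * rng.normal(size=1)

J, dt = 100, 0.005
U = rng.normal(size=(J, 2))                    # prior ensemble

for _ in range(4000):
    G = U @ A.T                                # forward-model evaluations
    S = G - G.mean(axis=0)                     # centred model outputs
    R = (G - y) @ Gamma_inv                    # noise-weighted data misfits
    D = U - U.mean(axis=0)
    C = D.T @ D / J                            # empirical covariance
    # Gradient-free drift: ensemble differences stand in for adjoints.
    drift = -(R @ S.T) @ U / J - U @ Sigma0_inv @ C
    L = np.linalg.cholesky(C + 1e-10 * np.eye(2))   # jitter for stability
    U = U + dt * drift + np.sqrt(2.0 * dt) * rng.normal(size=(J, 2)) @ L.T

# Closed-form Gaussian posterior for comparison
Sigma_post = np.linalg.inv(A.T @ Gamma_inv @ A + Sigma0_inv)
m_post = Sigma_post @ A.T @ Gamma_inv @ y
```

At stationarity the ensemble mean and spread should approximate the closed-form posterior mean and covariance.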
2018

PhD Thesis: Stochastic Methods for Emulation, Calibration and Reliability Analysis of Engineering Models. Garbuno-Inigo, Alfredo, 2018.
This dissertation examines the use of nonparametric Bayesian methods and advanced Monte Carlo algorithms for the emulation and reliability analysis of complex engineering computations. Firstly, the problem lies in reducing the computational cost of such models and generating posterior samples for the hyperparameters of a Gaussian process (GP). In a GP, as the flexibility of the mechanism to induce correlations among training points increases, the number of hyperparameters increases as well. This leads to multimodal posterior distributions. Typical variants of MCMC samplers are not designed to overcome multimodality. Maximum posterior estimates of hyperparameters, on the other hand, do not guarantee a global optimiser. This presents a challenge when emulating expensive simulators in light of small data. Thus, new MCMC algorithms are presented which allow the use of full Bayesian emulators by sampling from their respective multimodal posteriors. Secondly, in order for these complex models to be reliable, they need to be robustly calibrated to experimental data. History matching solves the calibration problem by discarding regions of input parameter space. This allows one to determine which configurations are likely to replicate the observed data. In particular, the GP surrogate model’s probabilistic statements are exploited, and the data assimilation process is improved. Thirdly, as sampling-based methods are increasingly being used in engineering, variants of sampling algorithms for other engineering tasks, namely reliability-based methods, are studied. Several new algorithms to solve these three fundamental problems are proposed, developed and tested in both illustrative examples and industrial-scale models.
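The history-matching idea mentioned above (discarding regions of input space) is usually driven by an implausibility measure. A minimal sketch follows; the stand-in emulator, the observation, and all variance values are hypothetical, and the 3-sigma cutoff is a common convention rather than anything specific to this thesis.

```python
import numpy as np

# History matching: rule out inputs whose emulator prediction is implausibly
# far from the observation, given all sources of variance.
z_obs, obs_var, disc_var = 1.2, 0.05, 0.02   # observation and variances

def emulator_mean_var(x):
    # Hypothetical stand-in for a GP emulator's predictive mean and variance.
    return np.sin(3 * x) + x, 0.01 * (1 + x**2)

def implausibility(x):
    m, v = emulator_mean_var(x)
    return np.abs(z_obs - m) / np.sqrt(v + obs_var + disc_var)

# Keep only the "not ruled out yet" region under a 3-sigma cutoff.
x_grid = np.linspace(-2, 2, 401)
not_ruled_out = x_grid[implausibility(x_grid) < 3.0]
```

Subsequent waves would refit the emulator on the surviving region and repeat, shrinking the plausible input space.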
2017

BUS: Bayesian Updating and Model Class Selection With Subset Simulation. DiazDelaO, F. A., Garbuno-Inigo, A., Au, S. K., and Yoshida, I. Computer Methods in Applied Mechanics and Engineering, 2017.
Identifying the parameters of a model and rating competitive models based on measured data has been among the most important but challenging topics in modern science and engineering, with great potential of application in structural system identification, updating and development of high-fidelity models. These problems in principle can be tackled using a Bayesian probabilistic approach, where the parameters to be identified are treated as uncertain and their inference information is given in terms of their posterior (i.e., given data) probability distribution. For complex models encountered in applications, efficient computational tools robust to the number of uncertain parameters in the problem are required for computing the posterior statistics, which can generally be formulated as a multidimensional integral over the space of the uncertain parameters. Subset Simulation (SuS) has been developed for solving reliability problems involving complex systems and is found to be robust to the number of uncertain parameters. An analogy has recently been established between a Bayesian updating problem and a reliability problem, which opens up the possibility of efficient solution by SuS. The formulation, called BUS (Bayesian Updating with Structural reliability methods), is based on the conventional rejection principle. Its theoretical correctness and efficiency require the prudent choice of a multiplier, which has remained an open question. Motivated by the choice of the multiplier and its philosophical role, this paper presents a study of BUS. The work leads to a revised formulation that resolves the issues regarding the multiplier so that SuS can be implemented without knowing the multiplier. Examples are presented to illustrate the theory and applications.
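The rejection principle underlying BUS can be sketched on a conjugate toy problem: augment the parameter with an auxiliary uniform variable and accept when it falls below the scaled likelihood. In this sketch the multiplier c = 1/max L is available in closed form; the point of the paper's revised formulation is precisely that, in general, it is not, and SuS can be run without knowing it. The prior, likelihood, and sample counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy conjugate problem: prior N(0, 1), Gaussian likelihood around y_obs.
y_obs, sigma = 1.0, 0.5

def likelihood(theta):
    return np.exp(-0.5 * ((theta - y_obs) / sigma) ** 2) / (
        sigma * np.sqrt(2 * np.pi))

# Multiplier c must satisfy c * L(theta) <= 1; here the maximum of L is known.
c = sigma * np.sqrt(2 * np.pi)            # 1 / max L, attained at theta = y_obs

# BUS rejection step: draw (theta, u) from prior x U(0, 1), accept u < c*L.
theta = rng.normal(size=100_000)
u = rng.uniform(size=100_000)
posterior = theta[u < c * likelihood(theta)]
```

For this conjugate setup the accepted samples should match the analytical posterior, which has mean 0.8 and variance 0.2.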
2016

Transitional Annealed Adaptive Slice Sampling for Gaussian Process Hyper-Parameter Estimation. Garbuno-Inigo, Alfredo, DiazDelaO, F. A., and Zuev, K. M. International Journal for Uncertainty Quantification, 2016.
Surrogate models have become ubiquitous in science and engineering for their capability of emulating expensive computer codes, necessary to model and investigate complex phenomena. Bayesian emulators based on Gaussian processes adequately quantify the uncertainty that results from the cost of the original simulator, and thus the inability to evaluate it on the whole input space. However, it is common in the literature that only a partial Bayesian analysis is carried out, whereby the underlying hyperparameters are estimated via gradient-free optimisation or genetic algorithms, to name a few methods. On the other hand, maximum a posteriori (MAP) estimation could discard important regions of the hyperparameter space. In this paper, we carry out a more complete Bayesian inference, that combines Slice Sampling with some recently developed Sequential Monte Carlo samplers. The resulting algorithm improves the mixing in the sampling through delayed rejection, the inclusion of an annealing scheme akin to Asymptotically Independent Markov Sampling and parallelisation via Transitional Markov Chain Monte Carlo. Examples related to the estimation of Gaussian process hyperparameters are presented.
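The building block combined with annealing and parallel chains above is the univariate slice sampler; a minimal stepping-out variant (following Neal's well-known scheme) is sketched below. The target density and tuning width are illustrative assumptions, not the paper's test cases.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    return -0.5 * x**2                     # unnormalised standard normal

def slice_sample(x0, n, w=1.0):
    """Univariate slice sampler with stepping-out and shrinkage."""
    samples, x = [], x0
    for _ in range(n):
        # Draw the vertical slice level under the density at x.
        log_y = log_target(x) + np.log(rng.uniform())
        # Step out an interval of width w until both ends leave the slice.
        left = x - w * rng.uniform()
        right = left + w
        while log_target(left) > log_y:
            left -= w
        while log_target(right) > log_y:
            right += w
        # Shrink the interval until a proposal lands inside the slice.
        while True:
            x_new = rng.uniform(left, right)
            if log_target(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        samples.append(x)
    return np.array(samples)

draws = slice_sample(0.0, 20_000)
```

Because the slice level and interval adapt to the local density, the sampler needs no step-size tuning beyond the initial width w.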

Gaussian Process Hyper-Parameter Estimation Using Parallel Asymptotically Independent Markov Sampling. Garbuno-Inigo, A., DiazDelaO, F. A., and Zuev, K. M. Computational Statistics & Data Analysis, 2016.
Gaussian process emulators of computationally expensive computer codes provide fast statistical approximations to model physical processes. The training of these surrogates depends on the set of design points chosen to run the simulator. Due to computational cost, such a training set is bound to be limited, and quantifying the resulting uncertainty in the hyperparameters of the emulator by unimodal distributions is likely to induce bias. In order to quantify this uncertainty, this paper proposes a computationally efficient sampler based on an extension of Asymptotically Independent Markov Sampling, a recently developed algorithm for Bayesian inference. Structural uncertainty of the emulator is obtained as a by-product of the Bayesian treatment of the hyperparameters. Additionally, the user can choose to perform stochastic optimisation to sample from a neighbourhood of the maximum a posteriori estimate, even in the presence of multimodality. Model uncertainty is also acknowledged through numerical stabilisation measures by including a nugget term in the formulation of the probability model. The efficiency of the proposed sampler is illustrated in examples where multimodal distributions are encountered.
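The quantity such hyperparameter samplers explore is the GP marginal likelihood, with the nugget term mentioned above serving double duty as a noise model and a numerical stabiliser. A minimal sketch of its evaluation follows; the RBF kernel, the synthetic data, and the fixed signal-variance and nugget values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic training data for a zero-mean GP (illustrative).
x = np.linspace(0, 1, 25)
y = np.sin(6 * x) + 0.1 * rng.normal(size=x.size)

def log_marginal(lengthscale, signal_var=1.0, nugget=0.01):
    """Log marginal likelihood of a zero-mean GP with RBF kernel + nugget."""
    d2 = (x[:, None] - x[None, :]) ** 2
    K = signal_var * np.exp(-0.5 * d2 / lengthscale**2) + nugget * np.eye(x.size)
    L = np.linalg.cholesky(K)              # nugget keeps K positive definite
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))               # -0.5 * log det K
            - 0.5 * x.size * np.log(2 * np.pi))

# Profile over the lengthscale; a full sampler would explore this surface.
lengthscales = np.linspace(0.05, 2.0, 50)
lml = np.array([log_marginal(ell) for ell in lengthscales])
best = lengthscales[np.argmax(lml)]
```

Surfaces like this can be multimodal in higher-dimensional hyperparameter spaces, which is what motivates sampling rather than settling for a single optimiser.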