STA 290 Seminar: Bala Rajaratnam
STA 290 Seminar Series
DATE: Thursday, January 26th 2017, 4:10pm
LOCATION: MSB 1147, Colloquium Room. Refreshments at 3:30pm in MSB 4110
SPEAKER: Bala Rajaratnam, Dept. of Statistics, UC Davis
TITLE: “MCMC-Based Inference in the Era of Big Data: A Fundamental Analysis of the Convergence Complexity of High-Dimensional Chains”
ABSTRACT: Markov chain Monte Carlo (MCMC) lies at the core of modern Bayesian methodology, much of which would be impossible without it. Thus, the convergence properties of Markov chains relevant to MCMC have received significant attention, and in particular, proving (geometric) ergodicity is of critical interest. Nevertheless, current methods do not yield convergence rates sharp enough to permit a meaningful analysis in terms of the parameter dimension p and the sample size n. As a result, a clear theoretical characterization of the behavior of modern Markov chains in high dimensions is not available.
In this paper, we first demonstrate that contemporary methods for establishing Markov chain convergence behavior have serious limitations when the dimension grows, such as in the so-called “Big Data” setting. We then employ novel theoretical approaches to rigorously establish the convergence behavior of Markov chains typical of high-dimensional MCMC. Unlike many comparable results in the literature, we obtain exact convergence rates in total variation distance by establishing upper and lower bounds that share the same rate constant in n and p. We are thus able to overcome some of the stated challenges in contemporary research on convergence of MCMC. We also show a universality result for the convergence rate across an entire spectrum of models. We then demonstrate the precise nature and severity of convergence problems that can occur in some important models when implemented in high dimensions, including phase transitions in the convergence rates in various n and p regimes. These convergence problems effectively eliminate the apparent safeguard of geometric ergodicity. We then demonstrate theoretical principles by which Markov chains can be analyzed to yield bounded geometric convergence rates (essentially recovering geometric ergodicity) even as the dimension p grows without bound. Additionally, we propose a diagnostic tool for establishing convergence (or the lack thereof) in high-dimensional MCMC. (Joint work with D. Sparks)
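For readers unfamiliar with the central notion in the abstract, geometric ergodicity (total variation distance to stationarity shrinking geometrically fast) can be illustrated with a toy example that is not from the talk: a stationary AR(1) chain, whose time-t marginal is available in closed form. The sketch below (assuming NumPy and SciPy are available; all names are illustrative, not the speaker's methodology) numerically approximates the TV distance at each step and checks that successive distances contract at a roughly constant rate.

```python
import numpy as np
from scipy.stats import norm

# Toy illustration of geometric ergodicity (NOT the models in the talk):
# the AR(1) chain  X_{t+1} = rho * X_t + sqrt(1 - rho^2) * Z,  Z ~ N(0,1),
# has stationary law N(0,1).  Started at X_0 = x0, its time-t marginal is
#   X_t ~ N(rho^t * x0, 1 - rho^(2t)),
# so the total variation distance to N(0,1) decays geometrically in t.

def tv_to_stationary(rho, x0, t, grid=np.linspace(-10.0, 10.0, 200001)):
    """Approximate TV( law(X_t | X_0 = x0), N(0,1) ) on a fine grid."""
    mean_t = rho**t * x0
    sd_t = np.sqrt(1.0 - rho**(2 * t))
    p = norm.pdf(grid, loc=mean_t, scale=sd_t)   # time-t marginal density
    q = norm.pdf(grid)                           # stationary density
    dx = grid[1] - grid[0]
    return 0.5 * np.sum(np.abs(p - q)) * dx      # TV = (1/2) * L1 distance

rho, x0 = 0.9, 5.0
dists = [tv_to_stationary(rho, x0, t) for t in range(1, 40)]
ratios = [d2 / d1 for d1, d2 in zip(dists, dists[1:])]
# The ratios settle near rho: geometric decay at rate rho.
```

In this one-dimensional toy the rate constant (here rho) is fixed; the talk's concern is precisely how such rate constants behave as n and p grow, where the geometric guarantee can become vacuous.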