Plenaries

We are pleased to announce the following list of plenary speakers. Titles and abstracts are given below.

We will also have two tutorials on Sunday afternoon (1st of July). They will take place at Inria Rennes.

  • Alexander Keller (NVIDIA) on Monte Carlo in deep learning,
  • Gerardo Rubino (Inria) on rare event simulation.

Titles and abstracts

Christophe Andrieu: Some theoretical results concerning nonreversible Markov chain and Markov process Monte Carlo algorithms

Nonreversible processes have recently attracted renewed interest in the context of Monte Carlo simulation. Departure from reversibility offers new methodological opportunities and has shown great promise in some contexts, but, in contrast with the reversible scenario, the theory behind these algorithms is largely underdeveloped. In this talk we will review recent results which provide some insights into what can be expected of their behaviour.

Pierre Henry-Labordere: Branching diffusion representation for nonlinear Cauchy problems and Monte Carlo approximation

We provide probabilistic representations of the solutions of some semilinear hyperbolic and high-order PDEs based on branching diffusions. These representations pave the way for a Monte Carlo approximation of the solution, thus bypassing the curse of dimensionality. We illustrate the numerical implications in the context of some popular PDEs in physics, such as the nonlinear Klein-Gordon equation, a simplified scalar version of the Yang-Mills equation, a fourth-order nonlinear beam equation, and the Gross-Pitaevskii PDE as an example of nonlinear Schrödinger equations.

Joint work with Nizar Touzi (CMAP, Ecole Polytechnique).
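To fix ideas, here is a minimal illustration in the simplest parabolic (KPP-type) case; it is the classical McKean branching representation, not the more general construction of the talk. For
\[
\partial_t u = \tfrac{1}{2}\Delta u + \beta\,(u^2 - u), \qquad u(0,\cdot) = g, \quad 0 \le g \le 1,
\]
one has
\[
u(t,x) = \mathbb{E}\Big[\prod_{i=1}^{N_t} g\big(X_t^i\big)\Big],
\]
where $X_t^1,\dots,X_t^{N_t}$ are the particles alive at time $t$ of a binary branching Brownian motion started from a single particle at $x$, branching at rate $\beta$. A Monte Carlo estimator of $u(t,x)$ averages the product over independently simulated branching trees, so no spatial grid is required and the cost does not blow up with the dimension.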

Eric Moulines: Langevin MCMC: theory and methods

In the machine learning literature, a large number of problems amount to sampling from a density which is log-concave (at least in the tails) but possibly non-smooth. Most of the research effort so far has been devoted to the maximum a posteriori (MAP) problem, which amounts to solving a high-dimensional convex (possibly non-smooth) program.

The purpose of this talk is to understand how ideas which have proven very useful in the machine learning community for solving large-scale optimization problems can be used to design efficient sampling algorithms, with “useful” convergence bounds.

In high dimension, first-order methods (exploiting exclusively gradient information of the log-posterior) are a must. Most of the efficient algorithms known so far may be seen as variants of the gradient descent algorithm. A natural candidate is the “Unadjusted Langevin Algorithm”, which is derived from the Euler discretization of the Langevin diffusion and may be seen as a noisy version of gradient descent. By replacing the full gradient with a stochastic approximation of the gradient evaluated over mini-batches, a computationally efficient version may be derived (which has the same cost as a stochastic gradient algorithm). This algorithm may be generalized to the non-smooth case by “regularizing” the objective function. The Moreau-Yosida inf-convolution is an appropriate candidate in such a case, because it does not modify the minimum value of the criterion while transforming a non-smooth optimization problem into a smooth one. We will prove convergence results for these algorithms with explicit convergence bounds, both in Wasserstein distance and in total variation. We will discuss in particular how these methods can be adapted to sample over a compact domain and to compute normalizing constants.
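As a minimal sketch of the Unadjusted Langevin Algorithm described above (the step size, target, and function names are illustrative choices, not the speaker's implementation):

```python
import numpy as np

def ula(grad_log_pi, x0, step, n_iter, rng=None):
    """Unadjusted Langevin Algorithm: Euler discretization of the Langevin
    diffusion targeting pi, i.e. a noisy gradient ascent on log pi:
        X_{k+1} = X_k + step * grad_log_pi(X_k) + sqrt(2 * step) * N(0, I).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    chain = np.empty((n_iter, x.size))
    for k in range(n_iter):
        x = x + step * grad_log_pi(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
        chain[k] = x
    return chain

# Toy usage: standard Gaussian target in dimension 10, for which grad log pi(x) = -x.
samples = ula(lambda x: -x, x0=np.zeros(10), step=1e-2, n_iter=50_000)
print(samples[10_000:].mean(axis=0))  # close to 0 after discarding burn-in
```

Replacing `grad_log_pi` by a mini-batch estimate gives the stochastic-gradient variant mentioned in the abstract.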


Marvin Nakayama: Quantile Estimation via a Combination of Conditional Monte Carlo and Latin Hypercube Sampling

Quantiles or percentiles are often used to assess risk in a variety of application areas. For example, financial analysts frequently measure the risk of a portfolio through a 0.99-quantile, which is known as a value-at-risk. For complex stochastic models, analytically computing a quantile typically is not possible, so Monte Carlo simulation is employed. In addition to providing a point estimate for a quantile, we also want to measure the simulation estimate’s sampling error, which is typically done by giving a confidence interval (CI) for the quantile. Indeed, the U.S. Nuclear Regulatory Commission (NRC) requires a licensee of a nuclear power plant to account for sampling error when performing a probabilistic safety assessment. A licensee can demonstrate compliance with federal regulations using a “95/95 criterion,” which entails establishing, with 95% confidence, that a 0.95-quantile lies below a mandated limit. Simple random sampling may result in a quantile estimator with an unusably wide CI, especially for an extreme quantile. Moreover, each simulation run of the model may require substantial computation time, which motivates the use of variance-reduction techniques (VRTs). We discuss combining two well-known VRTs, conditional Monte Carlo and Latin hypercube sampling, to estimate a quantile. The two methods can work synergistically for quantile estimation, greatly reducing the variance obtained by either method by itself. In addition to devising a point estimator for the quantile when applying the combined approaches, we also describe how to construct asymptotically valid CIs for the quantile. Numerical results demonstrate the effectiveness of the methods.
This is joint work with Hui Dong.
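For orientation, here is a sketch of the simple-random-sampling baseline the talk improves upon: a crude quantile point estimate with a distribution-free CI based on order statistics. It does not include the conditional Monte Carlo or Latin hypercube components, and the "model output" below is a stand-in.

```python
import numpy as np
from scipy import stats

def quantile_ci_srs(output, p=0.95, conf=0.95):
    """p-quantile point estimate and distribution-free CI from i.i.d. runs.

    The count of observations below the true p-quantile is Binomial(n, p),
    so order statistics at binomial quantiles give a valid CI.
    """
    x = np.sort(np.asarray(output))
    n = x.size
    point = x[int(np.ceil(n * p)) - 1]
    lo = x[max(int(stats.binom.ppf((1 - conf) / 2, n, p)) - 1, 0)]
    hi = x[min(int(stats.binom.ppf(1 - (1 - conf) / 2, n, p)), n - 1)]
    return point, (lo, hi)

rng = np.random.default_rng(0)
runs = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # stand-in for simulation output
print(quantile_ci_srs(runs))  # true 0.95-quantile is exp(1.645) ≈ 5.18
```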

Marvin’s slides: mcqmc2018-Nakayama

Barry Nelson: Selecting the Best Simulated System: Thinking Differently About an Old Problem

Discrete-event stochastic simulation is typically employed to evaluate the feasibility of a system design, to assess a design’s sensitivity to unknowns, or to optimize system performance. When the speaker was taking his first simulation course in 1980, academic thinking about simulation optimization was just beginning to be influenced by the statistical methodologies of ranking & selection (R&S), pioneered for an entirely different setting by Shanti Gupta at Purdue and Bob Bechhofer at Cornell. Dave Goldsman, then a Ph.D. student, recognized the relevance to simulation, and nearly 40 years and hundreds of published papers later, R&S is a standard simulation tool for practitioners, a feature of many simulation languages, and still generating papers.
After some background on R&S, I attempt to answer these questions: Is the R&S problem still relevant? Are the usual goals of R&S sensible when the number of system designs becomes extremely large? Are the methods used to build R&S procedures incompatible with high-performance computing? In brief, should we think differently about selecting the best system?

Barry’s slides

Friedrich Pillichshammer: Discrepancy of digital sequences: new results on a classical QMC topic

Digital sequences are a topic that belongs to the foundations of QMC theory. Besides the Halton sequence, they are the prototypes of sequences with low discrepancy. The first examples were given by Il’ya Meerovich Sobol’ and Henri Faure with their famous constructions, but the unifying theory was developed later by Harald Niederreiter. Nowadays there is a multitude of examples of digital sequences, and it is classical knowledge that the star discrepancy of the initial N elements of such sequences can achieve a convergence rate of order of magnitude (log N)^s/N, where s denotes the dimension. On the other hand, very little has been known about other norms of the discrepancy function of digital sequences, besides evident estimates in terms of the star discrepancy.
In this talk we present some recent results about various types of discrepancy of digital sequences. This comprises: star discrepancy and weighted star discrepancy, L_p-discrepancy, discrepancy with respect to bounded mean oscillation and exponential Orlicz norms, as well as Sobolev, Besov and Triebel-Lizorkin norms with dominating mixed smoothness.
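For readers less familiar with the quantities involved, the standard definitions (in common notation, not necessarily the speaker's) are as follows. For a point set $P = \{x_0,\dots,x_{N-1}\} \subset [0,1)^s$, the discrepancy function is
\[
\Delta_P(t) = \frac{\#\{\,n : x_n \in [0,t)\,\}}{N} - \prod_{j=1}^{s} t_j, \qquad t = (t_1,\dots,t_s) \in [0,1]^s,
\]
the star discrepancy and the $L_p$-discrepancy are
\[
D_N^*(P) = \sup_{t \in [0,1]^s} |\Delta_P(t)|, \qquad L_{p,N}(P) = \|\Delta_P\|_{L_p([0,1]^s)},
\]
and the classical bound mentioned above states that for digital sequences the first $N$ points satisfy $D_N^* = O((\log N)^s / N)$.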

Friedrich’s slides: MCQMC2018-Pillischshammer

Clémentine Prieur: Dimension reduction of the input parameter space for multivariate potentially vector-valued functions

Many problems that arise in uncertainty quantification, e.g., integrating or approximating multivariate functions, suffer from the curse of dimensionality. The cost of computing a sufficiently accurate approximation indeed grows dramatically with the dimension of the input parameter space.
It thus seems important to identify and exploit some notion of low-dimensional structure, e.g., the intrinsic dimension of the model. A function varying primarily along a few directions of the input parameter space is said to be of low intrinsic dimension. In that setting, algorithms for quantifying uncertainty that focus on these important directions are expected to reduce the overall cost.
A common approach to reducing a function’s input dimension is the truncated Karhunen-Loève decomposition, which exploits the correlation structure of the function’s input space. In the present talk, we propose to exploit not only input correlations but also the structure of the input-output map itself.
We will first focus the presentation on approaches based on global sensitivity analysis. The main drawback of global sensitivity analysis is the cost required to estimate sensitivity indices such as Sobol’ indices. This is the main reason why we turn to the notion of active subspaces, defined as eigenspaces of the average outer product of the function’s gradient with itself. They capture the directions along which the function varies the most, in the sense that its output responds most strongly to input perturbations, in expectation over the input measure. In particular, we will present recent results dealing with the framework of multivariate vector-valued functions.
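A minimal Monte Carlo sketch of the active-subspace computation just described (illustrative only; the function and sampler names are placeholders, and gradients are assumed to be available):

```python
import numpy as np

def active_subspace(grad_f, sample_inputs, n_samples, k, rng=None):
    """Monte Carlo estimate of an active subspace.

    Estimates C = E[ grad f(X) grad f(X)^T ] over the input measure, then
    returns the k leading eigenvectors (the directions along which f varies
    most on average) together with all eigenvalues.
    """
    rng = np.random.default_rng() if rng is None else rng
    xs = sample_inputs(n_samples, rng)             # (n_samples, d) draws from the input measure
    grads = np.array([grad_f(x) for x in xs])      # (n_samples, d) gradients
    C = grads.T @ grads / n_samples                # average outer product of the gradient
    eigval, eigvec = np.linalg.eigh(C)             # ascending order
    idx = np.argsort(eigval)[::-1]
    return eigvec[:, idx[:k]], eigval[idx]

# Toy example: f(x) = sin(a . x) varies only along the direction a.
d = 20
a = np.ones(d) / np.sqrt(d)
grad_f = lambda x: np.cos(a @ x) * a
sample_inputs = lambda n, rng: rng.uniform(-1, 1, size=(n, d))
W, lam = active_subspace(grad_f, sample_inputs, n_samples=2000, k=1)
print(lam[:3])  # one dominant eigenvalue, the rest near zero
```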

Christoph Schwab: High-dimensional Quadrature in UQ for PDEs

We review recent results on high-dimensional numerical integration by QMC methods applied to direct and inverse uncertainty quantification for PDEs. Particular attention will be paid to the interplay between sparsity of (generalized) polynomial chaos expansions of integrand functions and dimension-independent convergence rates afforded by higher-order QMC integration. Furthermore, we carefully expound the impact of the local support structure of representation systems upon the algorithmic complexity of QMC rule construction. A number of techniques, such as polynomial lattice rules and their scrambled versions, and extrapolated lattice rules affording fast matrix-vector multiplication, will be considered. We compare (heretically, perhaps, at MCQMC…) dimension-independent QMC convergence rates with rates afforded by integrand-adapted Smolyak-type constructions. Applications to direct and Bayesian inverse UQ for PDEs will be considered.
Numerical examples will comprise PDEs with log-Gaussian inputs, and Bayesian shape identification problems in nondestructive testing.
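As a toy illustration of the simplest relative of the lattice rules named above, here is a randomly shifted rank-1 lattice rule; the generating vector below is an arbitrary placeholder (in practice it comes from a component-by-component construction), and the polynomial and extrapolated rules of the talk are more elaborate.

```python
import numpy as np

def shifted_lattice_rule(f, z, n, n_shifts=8, rng=None):
    """Randomly shifted rank-1 lattice rule on [0,1]^s.

    Points are frac(k * z / n + shift), k = 0..n-1; averaging over independent
    uniform shifts keeps the estimator unbiased and gives a simple error estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = np.asarray(z, dtype=float)
    base = np.mod(np.arange(n)[:, None] * z[None, :] / n, 1.0)  # (n, s) lattice points
    ests = []
    for _ in range(n_shifts):
        pts = np.mod(base + rng.uniform(size=z.size), 1.0)
        ests.append(np.mean([f(x) for x in pts]))
    ests = np.array(ests)
    return ests.mean(), ests.std(ddof=1) / np.sqrt(n_shifts)

# Toy integrand in s = 10 dimensions with exact integral 1.
s = 10
f = lambda x: np.prod(1.0 + (x - 0.5) / (1.0 + np.arange(s)) ** 2)
z = [1, 182667, 469891, 498753, 110745, 446247, 250185, 118627, 245333, 283199]  # placeholder vector
print(shifted_lattice_rule(f, z, n=2**10))
```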

Christoph’s slides are here.

Tutorials:

Alexander Keller: Monte Carlo and Quasi-Monte Carlo Methods in Machine Learning

The recent progress in computer architecture has enabled disruptive advances in artificial intelligence, which in turn have created overwhelming industrial and academic interest, especially in deep learning.
The tutorial will survey some principles of machine learning with a focus on deep neural networks and reinforcement learning. The relations between this field and high-dimensional function approximation, information-based complexity theory, and Monte Carlo and quasi-Monte Carlo methods offer important research opportunities for the Monte Carlo and quasi-Monte Carlo community.

Alex Keller’s slides

Gerardo Rubino: Introduction to rare event analysis using Monte Carlo

The analysis of rare events is one of the main difficulties that we encounter with Monte Carlo techniques: the standard (or crude, or naive) approach, while general, fails when addressing this task. More powerful (but less general) methods must then be employed. In this tutorial, we will introduce some fundamental families of techniques specialized in dealing with rare events. Since this is a broad field, we will use dependability evaluation as the main application area, but we will at least mention other important types of problems where rareness appears, and the associated available approaches to attack them. Dependability evaluation is itself a large domain, and several important classes of Monte Carlo techniques have proven successful on problems involving rare events, some of them very useful in other areas as well. In the presentation, we will illustrate the kind of efficiency that can be achieved, using numerical examples coming from realistic models.
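To make the "crude Monte Carlo fails on rare events" point concrete, here is a hedged toy sketch: a Gaussian tail probability estimated by crude sampling and by importance sampling with a mean-shifted proposal. This is just one of the families of techniques the tutorial surveys, and the numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
threshold, n = 5.0, 100_000          # P(N(0,1) > 5) ≈ 2.9e-7: a rare event

# Crude Monte Carlo: almost surely zero hits at this sample size.
x = rng.standard_normal(n)
crude = (x > threshold).mean()

# Importance sampling: draw from the shifted proposal N(threshold, 1) and
# reweight by the likelihood ratio phi(y)/phi(y - threshold) = exp(-threshold*y + threshold^2/2),
# which keeps the estimator unbiased while making the event frequent under the proposal.
y = rng.standard_normal(n) + threshold
w = np.exp(-threshold * y + 0.5 * threshold**2)
vals = w * (y > threshold)
is_est = vals.mean()
is_err = vals.std(ddof=1) / np.sqrt(n)

print(crude, is_est, is_err)         # crude is typically 0.0; IS is accurate with a small error
```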