Michael Williams

Gather.town id
GW03
Poster Title
Nessai: Improved nested sampling with normalising flows for gravitational-wave inference
Institution
University of Glasgow
Abstract (short summary)
Our understanding of the sources that produce gravitational waves hinges on the ability to perform Bayesian inference on the physical parameters that describe them. However, for certain waveforms and systems, this is computationally expensive. In this talk, I will present our new nested sampling algorithm, nessai, which incorporates machine learning to improve sampling efficiency. While nessai is applicable to general inference problems, in this work we focus on the application to compact binary coalescence, where we achieve a factor-of-two improvement over current methods in use by the LVK Collaboration.
Plain text (extended) Summary
Nested sampling is a widely used algorithm for Bayesian inference. However, it can be computationally expensive to run, often requiring hours to days to converge.

We introduce a new nested sampling algorithm to address the main bottleneck in nested sampling: proposing new samples within the current likelihood contour. Our nested sampler is called nessai (Nested Sampling with Artificial Intelligence) and it incorporates a machine learning-based proposal method.

In nessai, the current set of live points is used to train a normalising flow, a type of generative machine learning algorithm. The normalising flow learns an invertible mapping from the parameter space to a simpler latent space, for which we typically choose a Gaussian distribution. Since the latent distribution is Gaussian, contours of equal probability in the latent space are n-spheres. This allows us to approximate the current likelihood contour by an n-sphere in the latent space and, since the flow is invertible, map it back to the parameter space. We can also draw new samples in the parameter space by simply sampling within the n-sphere in the latent space and applying the inverse mapping.
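As a rough illustration of the latent-space step, the sketch below draws points uniformly within an n-sphere (n-ball) in the latent space; in nessai these latent samples would then be passed through the inverse of the trained flow to obtain parameter-space samples. The names used here (sample_in_nball, flow.inverse) are illustrative placeholders, not part of the nessai API.

    import numpy as np

    def sample_in_nball(n_samples, dims, radius, rng=None):
        # Draw points uniformly within an n-ball of the given radius:
        # directions from normalised Gaussian draws, radii from r = R * U^(1/dims).
        rng = rng or np.random.default_rng()
        z = rng.standard_normal((n_samples, dims))
        z /= np.linalg.norm(z, axis=1, keepdims=True)
        r = radius * rng.uniform(size=(n_samples, 1)) ** (1.0 / dims)
        return r * z

    # Latent-space samples inside a contour of radius 3; in nessai these would be
    # mapped back to the parameter space via the inverse of the trained flow,
    # e.g. x = flow.inverse(z) (a placeholder, not the real interface).
    latent_samples = sample_in_nball(1000, dims=2, radius=3.0)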

We incorporate this proposal into nessai; the result is an algorithm with three key stages (a simplified sketch follows the list):
1. Training: the normalising flow is trained on the current set of live points,
2. Population: new candidate points are drawn and stored in a pool for later use,
3. Proposal: new points are taken from the pool and accepted if their likelihood is greater than the current worst likelihood.
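To make these stages concrete, here is a highly simplified, self-contained sketch of the loop structure. A Gaussian fitted to the live points stands in for the trained normalising flow, the likelihood is a toy 2-D Gaussian, and the prior-weighting corrections that nessai applies to the proposal are omitted; this is not the actual nessai implementation.

    import numpy as np

    rng = np.random.default_rng(42)

    def log_likelihood(points):
        # Toy 2-D Gaussian likelihood standing in for the expensive CBC likelihood.
        return -0.5 * np.sum(points ** 2, axis=-1)

    n_live, n_iterations, pool_size = 500, 2000, 1000
    live = rng.uniform(-5, 5, size=(n_live, 2))  # live points drawn from a U(-5, 5)^2 prior
    logl = log_likelihood(live)
    pool = np.empty((0, 2))
    pool_logl = np.empty(0)

    for _ in range(n_iterations):
        worst = np.argmin(logl)
        logl_min = logl[worst]
        if len(pool) == 0:
            # 1. Training: fit the proposal to the current live points
            #    (a Gaussian here, a normalising flow in nessai).
            mean, cov = live.mean(axis=0), np.cov(live, rowvar=False)
            # 2. Population: draw a pool of candidate points and evaluate their likelihoods.
            pool = rng.multivariate_normal(mean, 2 * cov, size=pool_size)
            pool_logl = log_likelihood(pool)
        # 3. Proposal: take a point from the pool whose likelihood beats the current worst.
        accepted = np.nonzero(pool_logl > logl_min)[0]
        if len(accepted) == 0:
            pool, pool_logl = pool[:0], pool_logl[:0]  # pool exhausted, retrain next iteration
            continue
        i = accepted[0]
        live[worst], logl[worst] = pool[i], pool_logl[i]
        pool = np.delete(pool, i, axis=0)
        pool_logl = np.delete(pool_logl, i)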

We verify nessai by analysing 128 simulated gravitational-wave signals from compact binary coalescences. We start by performing a probability-probability (P-P) test, which indicates whether the posterior distributions recovered by nessai are statistically reliable. We show the results in figure 5, where we see that nessai produces reliable estimates of the posteriors.
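For readers unfamiliar with the test, the sketch below shows the basic idea using simulated Gaussian posteriors (not the actual analysis): for each injection we record the credible level at which the true value lies in the recovered posterior, and for a well-calibrated sampler these levels should be uniformly distributed.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def credible_level(posterior_samples, true_value):
        # Fraction of posterior samples below the injected value (single parameter).
        return np.mean(posterior_samples < true_value)

    # One credible level per simulated injection; here the "posteriors" are toy
    # Gaussians centred on the truth, so the levels should be consistent with uniform.
    credible_levels = np.array([
        credible_level(rng.normal(truth, 1.0, size=5000), truth)
        for truth in rng.normal(0.0, 1.0, size=128)
    ])
    ks_statistic, p_value = stats.kstest(credible_levels, "uniform")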

We also run another nested sampler, dynesty, as a point of comparison. We compare the log-evidences, total number of likelihood evaluations and total run time in figure 6. We see good agreement between the log-evidences produced by the samplers. We also see that nessai requires on average 2.07 times fewer likelihood evaluations than dynesty and converges on average 2.3 times faster. In figure 7, we compare the posterior distributions for a particular injection and see that the posterior distributions produced by each sampler closely match.

We highlight another advantage of nessai's design: it allows for easy parallelisation of the likelihood evaluation, which speeds up convergence. In figure 8, we show the time spent on each part of the core algorithm in nessai as a function of the number of threads. We see that with a single thread the run takes approximately 8 hours and, as we increase to 16 threads, this reduces to approximately 4 hours. The figure also shows how the cost of training and population can limit the reduction in run time, becoming the main bottleneck when running with 16 threads.
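The reason this parallelisation is straightforward is that candidate points are drawn in batches during the population stage, so their likelihoods can be evaluated independently. The sketch below illustrates the idea with Python's multiprocessing; it is not nessai's own implementation, and the option that enables this in nessai (n_pool, if I recall the documentation correctly) should be checked in the documentation.

    from multiprocessing import Pool
    import numpy as np

    def log_likelihood(point):
        # Toy stand-in for the expensive compact-binary-coalescence likelihood.
        return -0.5 * float(np.sum(point ** 2))

    if __name__ == "__main__":
        # A batch of candidate points, as produced by the population stage.
        candidates = np.random.default_rng(1).standard_normal((1000, 15))
        # Evaluate the likelihoods in parallel across four worker processes.
        with Pool(processes=4) as pool:
            log_likelihoods = pool.map(log_likelihood, candidates)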

Finally, we show one of the diagnostic plots included in nessai. This allows the user to identify issues during sampling without the need to run another analysis for comparison. In particular, we show a plot of the insertion indices. This plot relies on the fact that the insertion indices of new live points should be uniformly distributed, and deviations from uniformity indicate over- or under-convergence. Such deviations can be identified in a histogram of the indices.
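A minimal version of this check is sketched below: the insertion indices recorded during a run are histogrammed and compared against a uniform distribution. Here the indices are simulated as uniform, i.e. the behaviour expected from a healthy run, and the comparison uses an approximate Kolmogorov-Smirnov test rather than nessai's own implementation.

    import numpy as np
    from scipy import stats

    nlive = 1000
    # Insertion indices recorded during a run; simulated as uniform here.
    indices = np.random.default_rng(2).integers(0, nlive, size=10_000)

    # Deviations from uniformity show up in the histogram and in a KS test
    # against a uniform distribution on [0, 1) after rescaling the indices.
    counts, edges = np.histogram(indices, bins=20, range=(0, nlive))
    ks_statistic, p_value = stats.kstest(indices / nlive, "uniform")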

So, why should you use nessai?
* It can speed up inference
* It includes diagnostics that allow you to identify issues during sampling
* It can easily parallelise the likelihood evaluation

How can you use nessai?
You can install nessai via pip. For further information, please see our paper, the documentation, or the GitHub repository; a minimal usage example is sketched below.
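The sketch below shows roughly what a minimal analysis looks like, based on the example in the documentation: define a model with a log-prior and log-likelihood, then hand it to the sampler. Exact class and argument names may differ between versions, so treat this as indicative rather than definitive.

    # pip install nessai
    import numpy as np
    from nessai.flowsampler import FlowSampler
    from nessai.model import Model

    class GaussianModel(Model):
        # A simple two-dimensional Gaussian likelihood with uniform priors.
        def __init__(self):
            self.names = ["x", "y"]
            self.bounds = {"x": [-10, 10], "y": [-10, 10]}

        def log_prior(self, x):
            # Uniform prior over the bounds; x is a structured array with fields in self.names.
            log_p = np.log(self.in_bounds(x), dtype=float)
            for name in self.names:
                log_p -= np.log(self.bounds[name][1] - self.bounds[name][0])
            return log_p

        def log_likelihood(self, x):
            log_l = np.zeros(x.size)
            for name in self.names:
                log_l += -0.5 * (x[name] ** 2) - 0.5 * np.log(2 * np.pi)
            return log_l

    sampler = FlowSampler(GaussianModel(), output="./outdir")
    sampler.run()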
URL
m.williams.4@research.gla.ac.uk, @michael_j_will, https://www.linkedin.com/in/michael-williams-693489161/