James Pearson

Career Stage
Student (postgraduate)
Poster Abstract

Strong galaxy-galaxy gravitational lensing is the distortion of the paths of light rays from a background galaxy into arcs or rings as viewed from Earth, caused by the gravitational field of an intervening foreground lens galaxy. Lensing provides a useful way of investigating the properties of distant galaxies and the early Universe, but doing so requires accurate modelling of the lens galaxy's mass profile. Conventionally this is done with relatively slow parametric techniques that fit the mass profile parameters. However, over the next few years new surveys such as Euclid will produce images of tens of thousands of gravitational lenses, so a more efficient method is needed to cope with such a large data set.

This project aims to use machine learning to develop a fast, automated approach for predicting mass profile parameters directly from images of these lenses, by training a convolutional neural network (CNN). The CNN can carry out the complex task of modelling strong lens systems with similar accuracy to parametric techniques but far more quickly. Different aspects of training and testing have been investigated, along with comparisons with conventional modelling, applied mainly to images with the characteristics expected of the Euclid survey.

Plain text summary
Strong Lensing with Neural Networks

James Pearson, Jacob Maresca, Nan Li, Simon Dye

Introduction

Strong galaxy-galaxy gravitational lensing is the distortion of the paths of light rays from a background galaxy into arcs or rings as viewed from Earth, caused by the gravitational field of an intervening foreground lens galaxy.
Lensing provides a useful way of investigating the properties of distant galaxies and the early Universe, but doing so requires accurate modelling of the lens galaxy's mass profile. Conventionally this is done with relatively slow parametric techniques that fit the mass profile parameters.

Project Overview

To date, several hundred strong lenses have been found across various surveys. However, over the next few years, surveys such as Euclid and the Legacy Survey of Space and Time (LSST) will generate billions of images containing many tens of thousands of lensing systems, so a more efficient method is needed to cope with such a large data set.
Hence, this project aims to use machine learning to develop a fast, automated approach to model strong gravitational lenses directly from images, by training a convolutional neural network (CNN). We aim to investigate how effectively CNNs can estimate lens profile parameters when applied to upcoming survey-style images, and to compare this with conventional parameter-fitting techniques.

Convolutional Neural Networks (CNNs)

CNNs are a class of neural networks designed mainly for analysing images and other grid-like data; their layers apply convolutional filters to extract information. A CNN improves through training, which typically requires at least tens of thousands of training images. Since not enough images of real lenses exist, the training images must be simulated instead.
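As a rough illustration of such a network, the sketch below (Python, using TensorFlow/Keras) builds a small CNN that regresses a few lens parameters from an image. The layer sizes, image shape and training call are illustrative assumptions, not the exact architecture used in this work.

# Minimal sketch: a small CNN regressing lens parameters from an image.
# Shapes and layer sizes are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers

def build_lens_cnn(image_shape=(100, 100, 1), n_params=3):
    """Small CNN that maps an image to a vector of lens-model parameters."""
    model = tf.keras.Sequential([
        layers.Input(shape=image_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(n_params),  # e.g. Einstein radius and ellipticity components
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# model = build_lens_cnn()
# model.fit(train_images, train_params, validation_split=0.1, epochs=20)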

Investigation

The CNN was trained on 50,000 images generated to resemble expected observations by Euclid (VIS band) and LSST (g, r and i bands). The network, which contains six convolutional layers, learned to predict values for the lensing galaxies' Einstein radii (the size of the ring) and complex ellipticity components (which can be converted to ellipticity and orientation).
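The conversion between the complex ellipticity components and the ellipticity magnitude and orientation can be written as in the sketch below, assuming the common convention e1 + i*e2 = e*exp(2i*phi); the function names are illustrative.

# Sketch of the ellipticity conversion, assuming e1 + i*e2 = e * exp(2i*phi).
import numpy as np

def components_to_polar(e1, e2):
    """Convert complex ellipticity components to magnitude and orientation."""
    e = np.hypot(e1, e2)            # ellipticity magnitude
    phi = 0.5 * np.arctan2(e2, e1)  # orientation angle in radians
    return e, phi

def polar_to_components(e, phi):
    """Inverse conversion, useful for building training labels."""
    return e * np.cos(2.0 * phi), e * np.sin(2.0 * phi)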
The CNN is now at a stage where it can accurately measure mass profile parameters for image catalogues simulated in the style of expected LSST and Euclid observations. While the network performed better on Euclid images than on single-band LSST images, it matched or exceeded its Euclid performance when given multi-band LSST g, r, i images, which allow it to more easily distinguish between the lens and the source.
The investigation also provided other insights that can inform future training. For example, the plot below shows the CNN errors when the test images are binned by lens ellipticity: while more elliptical lenses make the orientation easier to recover (as expected), the other parameters become increasingly harder to predict.

Comparing to conventional fitting

The CNN was retrained on a larger set of more complex images. We compared the CNN to PyAutoLens, a conventional parameter-fitting code, on different test sets, such as images with real Hubble Ultra Deep Field (HUDF) sources, with and without line-of-sight structure (LOSS). We also tried a combination of the two techniques, using the CNN's predictions as priors for PyAutoLens, as sketched below.
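As a rough sketch of this combined approach (not PyAutoLens's actual prior interface), each parameter's CNN point estimate and predicted uncertainty can be turned into a Gaussian prior for the subsequent conventional fit. The parameter names and the widening factor below are illustrative assumptions.

# Sketch only: build a Gaussian prior per parameter from CNN (mean, sigma)
# estimates. This does not use PyAutoLens's real prior API.
from scipy.stats import norm

def cnn_priors(cnn_estimates, widen=3.0):
    """Return a Gaussian prior per parameter, centred on the CNN estimate.

    `widen` inflates the CNN uncertainty so the prior does not
    over-constrain the subsequent conventional fit.
    """
    return {name: norm(loc=mu, scale=widen * sigma)
            for name, (mu, sigma) in cnn_estimates.items()}

# priors = cnn_priors({"einstein_radius": (1.20, 0.05),
#                      "e1": (0.10, 0.03),
#                      "e2": (-0.05, 0.03)})
# These distributions would then seed the conventional fit's priors.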
First, the CNN was fine-tuned so that its predicted uncertainties were well calibrated, i.e. so that its 1-sigma uncertainty predictions actually covered ~68% of the true values; a simple version of this check is sketched below. Current work suggests that while the CNN's accuracy appears to be equal to or slightly worse than that of PyAutoLens, the combination of the two is significantly better than either one separately.
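A minimal version of that calibration check is sketched here: the fraction of test-set true values falling within the CNN's predicted 1-sigma interval should be close to 68%. The array names are illustrative.

# Sketch of a 1-sigma coverage check on a held-out test set.
import numpy as np

def one_sigma_coverage(y_true, y_pred, sigma_pred):
    """Fraction of samples whose true value lies within +/- 1 sigma."""
    inside = np.abs(y_true - y_pred) <= sigma_pred
    return inside.mean()

# coverage = one_sigma_coverage(test_params, cnn_means, cnn_sigmas)
# If coverage differs from ~0.683, the predicted sigmas can be rescaled
# by a constant factor until the target coverage is reached.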

Summary

This project has so far achieved high accuracy in parameter estimation, on par with conventional fitting techniques, and with further training could even outperform them. In any case, combining CNNs with conventional parameter-fitting approaches is a promising new method that can achieve even better results than either technique on its own.
Poster Title
Strong Lensing with Neural Networks
Tags
Astronomy
Astrophysics
Data Science
Url
james.pearson@nottingham.ac.uk