Identification of multi-component LOFAR sources with multi-modal deep learning
Author
Alegre, Lara
Best, Philip
Sabater, Jose
Rottgering, Huub
Hardcastle, Martin
Williams, Wendy
Abstract
Modern high-sensitivity radio telescopes are detecting growing numbers of resolved sources with intricate radio structures and faint radio emission. These sources often present a challenge because source-detection software may identify them as several separate radio sources rather than as components of the same physically connected radio source. At present there are no reliable automatic methods to determine which radio components are distinct radio sources and which are parts of multi-component sources. We propose a deep learning classifier, applied to data from the LOFAR Two-Metre Sky Survey (LoTSS), to identify sources that belong to a multi-component system and therefore require component association. We combine different types of input data using multi-modal deep learning to extract spatial and local information about the radio source components: a convolutional neural network branch that processes radio images is combined with a neural network branch that uses parameters measured from the radio sources and their nearest neighbours. Our model retrieves 94 per cent of the multi-component sources on a balanced test set of 2,683 sources and achieves almost 97 per cent accuracy on the real, imbalanced data (323,103 sources). The approach holds potential for integration into pipelines for automatic radio component association and cross-identification. Our work demonstrates how deep learning can integrate different types of data and provide an effective solution for managing modern radio surveys.
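To make the multi-modal architecture described above concrete, the sketch below shows one way such a classifier could be structured: a convolutional branch over a radio-image cutout fused with a dense branch over catalogue parameters of the source and its nearest neighbours. This is an illustrative assumption only, not the authors' exact model; the cutout size (128×128), the number of tabular features (10), and all layer widths are placeholders.

```python
# Minimal sketch of a multi-modal classifier: CNN branch for radio cutouts
# plus dense branch for measured source/neighbour parameters.
# Sizes and layer choices are assumptions, not the published architecture.
import torch
import torch.nn as nn


class MultiModalClassifier(nn.Module):
    def __init__(self, n_tabular_features: int = 10):
        super().__init__()
        # CNN branch: single-channel radio cutout (assumed 128x128 pixels).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Dense branch: catalogue parameters of the source and its nearest neighbours.
        self.mlp = nn.Sequential(
            nn.Linear(n_tabular_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # Fusion head: concatenated features -> probability of being multi-component.
        self.head = nn.Sequential(
            nn.Linear(64 + 32, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, image: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.cnn(image), self.mlp(features)], dim=1)
        return torch.sigmoid(self.head(fused)).squeeze(1)


# Example forward pass with random tensors standing in for LoTSS cutouts and catalogue rows.
model = MultiModalClassifier()
images = torch.randn(4, 1, 128, 128)   # batch of radio-image cutouts
features = torch.randn(4, 10)          # batch of measured parameters
print(model(images, features).shape)   # torch.Size([4])
```

The key design point mirrored here is the late fusion of the two modalities: each branch summarises its own input, and only the concatenated representations feed the final classification head.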