Identification of multicomponent LOFAR sources with multimodal deep learning
Author
Alegre, Lara
Best, Philip
Sabater, Jose
Röttgering, Huub
Hardcastle, Martin J
Williams, Wendy L
Handle: 2299/28306
Abstract
Modern high-sensitivity radio telescopes are detecting an increasing number of resolved sources with intricate radio structures and fainter radio emission. These sources often present a challenge because source detectors may identify them as separate radio sources rather than as components of the same physically connected radio source. Currently, there are no reliable automatic methods to determine which radio components are single radio sources and which are parts of multicomponent sources. We propose a deep-learning classifier to identify, in data from the LOFAR Two-metre Sky Survey (LoTSS), the sources that are part of a multicomponent system and therefore require component association. We use multimodal deep learning to combine different types of input data and extract spatial and local information about the radio source components: a convolutional neural network branch that processes radio images is combined with a neural network branch that uses parameters measured for the radio sources and their nearest neighbours. Our model retrieves 94 per cent of the sources with multiple components on a balanced test set of 2683 sources and achieves almost 97 per cent accuracy on the real imbalanced data (323 103 sources). The approach holds promise for integration into pipelines for automatic radio component association and cross-identification. Our work demonstrates how deep learning can be used to integrate different types of data and create an effective solution for managing modern radio surveys.
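To illustrate the kind of two-branch multimodal architecture the abstract describes, the sketch below combines a convolutional branch over radio image cutouts with a fully connected branch over catalogue parameters of a source and its nearest neighbours, fused before a binary (single versus multicomponent) output. This is a minimal PyTorch sketch under assumed settings; the cutout size, number of tabular parameters, layer widths, and class names are illustrative and do not reproduce the authors' actual model.

```python
import torch
import torch.nn as nn

class MultimodalSourceClassifier(nn.Module):
    """Hypothetical two-branch network: a CNN for radio image cutouts and an
    MLP for measured source/neighbour parameters, fused into one logit that
    scores a source as multicomponent."""

    def __init__(self, n_params: int = 10):
        super().__init__()
        # Image branch: small CNN over single-channel radio cutouts.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                      # -> 32 * 4 * 4 = 512 features
        )
        # Parameter branch: MLP over catalogue properties of the source
        # and its nearest neighbours (count is an assumption).
        self.mlp = nn.Sequential(
            nn.Linear(n_params, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # Fusion head: concatenate both feature vectors, output one logit.
        self.head = nn.Sequential(
            nn.Linear(512 + 32, 64), nn.ReLU(),
            nn.Linear(64, 1),                  # logit for "multicomponent"
        )

    def forward(self, image: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
        features = torch.cat([self.cnn(image), self.mlp(params)], dim=1)
        return self.head(features)

# Example forward pass on a dummy batch of 4 sources.
model = MultimodalSourceClassifier(n_params=10)
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 10))
probs = torch.sigmoid(logits)  # probability of being a multicomponent source
```

Training such a sketch with a binary cross-entropy loss on labelled cutouts and catalogue rows would mirror the single-versus-multicomponent classification task described above.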