Article

Developing a victorious strategy to the second strong gravitational lensing data challenge

Journal

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY
Volume 515, Issue 4, Pages 5121-5134

Publisher

OXFORD UNIV PRESS
DOI: 10.1093/mnras/stac2047

Keywords

gravitational lensing: strong; methods: numerical; techniques: image processing

Funding

  1. CNPq [316072/2021-4]
  2. FAPERJ [201.456/2022, 433615/2018-4, 314672/2020-6]

Abstract

Strong lensing is a powerful probe of the matter distribution in galaxies and clusters and a relevant tool for cosmography. Analyses of strong gravitational lenses with deep learning have become a popular approach due to these astronomical objects' rarity and image complexity. Next-generation surveys will provide more opportunities to derive science from these objects and an increasing data volume to be analysed. However, finding strong lenses is challenging, as their number densities are orders of magnitude below those of galaxies. Therefore, specific strong lensing search algorithms are required to discover the highest number of systems possible with high purity and a low false alarm rate. The need for better algorithms has prompted the development of an open community data science competition named the strong gravitational lensing challenge (SGLC). This work presents the deep learning strategies and methodology used to design the highest scoring algorithm in the second SGLC (II SGLC). We discuss the approach used for this data set, the choice of a suitable architecture, in particular a network with two branches to work with images at different resolutions, and its optimization. We also discuss the detectability limit, the lessons learned, and prospects for defining an architecture tailored to a specific survey in contrast to a general-purpose one. Finally, we release the models and discuss the best choices for easily adapting the model to a data set representing a survey with a different instrument. This work is a step towards efficient, adaptable, and accurate analyses of strong lenses with deep learning frameworks.
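
The two-branch idea highlighted in the abstract can be illustrated compactly. The sketch below is a hypothetical PyTorch reconstruction of that idea, not the authors' released challenge model: one branch ingests a higher-resolution cutout, the other a lower-resolution multi-band cutout, and adaptive pooling lets each branch accept its own image size before the pooled features are concatenated into a shared lens/non-lens head. The band counts, image sizes, and layer widths are placeholders chosen only for the example.

import torch
import torch.nn as nn

class TwoBranchLensClassifier(nn.Module):
    """Illustrative two-branch CNN for strong-lens classification.

    Each branch processes images at one resolution; global average pooling
    makes the branches insensitive to the exact cutout size, and the pooled
    features are concatenated before a shared classification head.
    This is a sketch, not the released II SGLC model.
    """

    def __init__(self, channels_hi=1, channels_lo=3):
        super().__init__()
        # Branch for the higher-resolution (e.g. single visible-band) cutout
        self.branch_hi = nn.Sequential(
            nn.Conv2d(channels_hi, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Branch for the lower-resolution (e.g. multi-band) cutout
        self.branch_lo = nn.Sequential(
            nn.Conv2d(channels_lo, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Shared head operating on the concatenated branch features
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single lens / non-lens score
        )

    def forward(self, x_hi, x_lo):
        f_hi = self.branch_hi(x_hi)          # (N, 64, 1, 1)
        f_lo = self.branch_lo(x_lo)          # (N, 64, 1, 1)
        f = torch.cat([f_hi, f_lo], dim=1)   # (N, 128, 1, 1)
        return self.head(f)

if __name__ == "__main__":
    # Cutout sizes below are placeholders for two different instrument resolutions.
    model = TwoBranchLensClassifier()
    x_hi = torch.randn(4, 1, 200, 200)   # hypothetical high-resolution band
    x_lo = torch.randn(4, 3, 66, 66)     # hypothetical lower-resolution bands
    print(model(x_hi, x_lo).shape)       # torch.Size([4, 1])

Because each branch ends in adaptive pooling, the same head can be reused when the network is adapted to a survey whose cutouts have a different pixel scale, which is in the spirit of the cross-instrument adaptation the abstract mentions.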
