4.7 Article

Hierarchical Inference of the Lensing Convergence from Photometric Catalogs with Bayesian Graph Neural Networks

Journal

ASTROPHYSICAL JOURNAL
Volume 953, Issue 2, Pages -

Publisher

IOP Publishing Ltd
DOI: 10.3847/1538-4357/acdc25

Keywords

-

This paper presents a Bayesian graph neural network (BGNN) for estimating the weak lensing convergence (kappa) from photometric measurements of galaxies along a given line of sight (LOS). The method is particularly useful in strong gravitational time-delay cosmography (TDC), as it characterizes the external convergence (kappa_ext) from the lens environment and LOS, which is necessary for precise inference of the Hubble constant (H_0). The BGNN is trained on a large-scale simulation and evaluated on test sets with varying degrees of overlap with the training distribution. The results show that the BGNN accurately recovers the population mean of kappa in well-sampled test fields and extracts a stronger kappa signal than a traditional number-counts method in sparsely sampled regions. The hierarchical inference pipeline using BGNNs therefore has the potential to improve the characterization of the external convergence (kappa_ext) for precision TDC.
We present a Bayesian graph neural network (BGNN) that can estimate the weak lensing convergence (kappa) from photometric measurements of galaxies along a given line of sight (LOS). The method is of particular interest in strong gravitational time-delay cosmography (TDC), where characterizing the external convergence (kappa_ext) from the lens environment and LOS is necessary for precise Hubble constant (H_0) inference. Starting from a large-scale simulation with a kappa resolution of ~1', we introduce fluctuations on galaxy-galaxy lensing scales of ~1'' and extract random sight lines to train our BGNN. We then evaluate the model on test sets with varying degrees of overlap with the training distribution. For each test set of 1000 sight lines, the BGNN infers the individual kappa posteriors, which we combine in a hierarchical Bayesian model to yield constraints on the hyperparameters governing the population. For a test field well sampled by the training set, the BGNN recovers the population mean of kappa precisely and without bias (within the 2 sigma credible interval), resulting in a contribution to the H_0 error budget well under 1%. In the tails of the training set with sparse samples, the BGNN, which can ingest all available information about each sight line, extracts a stronger kappa signal compared to a simplified version of the traditional method based on matching galaxy number counts, which is limited by sample variance. Our hierarchical inference pipeline using BGNNs promises to improve the kappa_ext characterization for precision TDC. The code is available as a public Python package, Node to Joy.
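The population-level step described in the abstract (combining per-sight-line kappa posteriors into constraints on hyperparameters such as the population mean and scatter) can be illustrated with the standard interim-prior reweighting approach to hierarchical Bayesian inference. The sketch below is not the paper's Node to Joy implementation: the Gaussian population model, the flat interim prior, and all names and numbers (log_hyperlikelihood, kappa_samples, the toy data) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

def log_hyperlikelihood(mu, sigma, kappa_samples, interim_logpdf):
    """Log-likelihood of population hyperparameters (mu, sigma), given posterior
    samples of kappa for each sight line drawn under a shared interim prior."""
    # Illustrative population model: kappa ~ Normal(mu, sigma).
    log_pop = norm.logpdf(kappa_samples, loc=mu, scale=sigma)
    # Reweight each posterior sample by population model / interim prior.
    log_w = log_pop - interim_logpdf(kappa_samples)
    # Monte Carlo average per sight line, then sum over sight lines.
    per_line = logsumexp(log_w, axis=1) - np.log(kappa_samples.shape[1])
    return per_line.sum()

# Toy data: 200 sight lines, each with 200 posterior samples of kappa.
rng = np.random.default_rng(0)
true_mu, true_sigma, post_width = 0.01, 0.02, 0.03
kappa_true = rng.normal(true_mu, true_sigma, size=(200, 1))
kappa_obs = kappa_true + rng.normal(0.0, post_width, size=(200, 1))
kappa_samples = kappa_obs + post_width * rng.standard_normal((200, 200))
flat_interim = lambda k: np.zeros_like(k)  # samples assumed drawn under a flat interim prior

# Evaluate the hyper-posterior on a grid (flat hyperprior assumed);
# the MAP should land near (true_mu, true_sigma) up to Monte Carlo noise.
mus = np.linspace(-0.02, 0.04, 31)
sigmas = np.linspace(0.005, 0.05, 26)
logpost = np.array([[log_hyperlikelihood(m, s, kappa_samples, flat_interim)
                     for s in sigmas] for m in mus])
i, j = np.unravel_index(np.argmax(logpost), logpost.shape)
print(f"MAP population mean ~ {mus[i]:.3f}, scatter ~ {sigmas[j]:.3f}")
```

In the paper's pipeline the per-sight-line posteriors come from the BGNN rather than a toy Gaussian, and the hyper-posterior would typically be sampled with MCMC rather than evaluated on a grid; the reweighting step above only illustrates how individual posteriors can be combined into population-level constraints.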
