Article

Hölder Divergence-Based Reward Function for Poisson RFSs and Application to Multitarget Sensor Management

Journal

IEEE Sensors Journal
Volume 23, Issue 9, Pages 9999-10008

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/JSEN.2023.3255987

Keywords

Sensors; Sensor systems; Radio frequency; Target tracking; Linear programming; Uncertainty; Information theory; Information fusion; random finite sets (RFSs); sensor management; statistical divergences


Abstract

In this study, we propose a novel information-theoretic reward function based on the statistical Hölder divergence (HD). The Hölder divergence is a generalization of the Cauchy-Schwarz divergence (CSD). We extend the Hölder divergence to finite set statistics (FISST) densities, making it applicable to multiobject problems based on random finite set (RFS) theory. We derive analytic expressions for the extended Hölder divergence (EHD) for the case in which the multitarget densities have the form of Poisson RFSs, and we apply it to the probability hypothesis density (PHD) filter in a sequential Monte Carlo (SMC) implementation. We evaluate the performance of the proposed reward function in a multitarget sensor management problem in which the next position of a moving observer is chosen according to the value of the EHD-based reward function. The algorithm is compared against similar reward functions from the multitarget sensor management literature in terms of the optimal subpattern assignment (OSPA) metric, and the proposed reward function is shown to be superior. We also show that the management algorithm can be adapted to different situations, such as static and dynamic environments.
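The closed-form EHD for Poisson RFSs is derived in the paper itself; as a minimal numerical sketch of the underlying single-object quantity, the Hölder divergence between two densities f and g with conjugate exponents α and β (1/α + 1/β = 1) can be written as -log(∫fg / (‖f‖_α ‖g‖_β)), which reduces to the Cauchy-Schwarz divergence at α = β = 2. The function and grid below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def holder_divergence(f, g, xs, alpha=2.0):
    """Numerical Hölder divergence between densities f, g sampled on
    the uniform grid xs, with conjugate exponent beta = alpha/(alpha-1).
    At alpha = 2 this reduces to the Cauchy-Schwarz divergence.
    (Illustrative sketch; not the paper's closed-form Poisson-RFS EHD.)"""
    beta = alpha / (alpha - 1.0)
    dx = xs[1] - xs[0]
    inner = np.trapz(f * g, dx=dx)                        # integral of f*g
    norm_f = np.trapz(f ** alpha, dx=dx) ** (1.0 / alpha)  # ||f||_alpha
    norm_g = np.trapz(g ** beta, dx=dx) ** (1.0 / beta)    # ||g||_beta
    return -np.log(inner / (norm_f * norm_g))

# Two 1-D Gaussian densities on a common grid (hypothetical example).
xs = np.linspace(-10.0, 10.0, 20001)
gauss = lambda x, m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
f = gauss(xs, 0.0, 1.5 - 1.5)   # mean 0, sigma 1 written explicitly below
f = gauss(xs, 0.0, 1.0)
g = gauss(xs, 1.5, 1.0)

d_cs = holder_divergence(f, g, xs, alpha=2.0)  # Cauchy-Schwarz special case
d_h  = holder_divergence(f, g, xs, alpha=3.0)  # a general Hölder divergence
```

For equal-variance Gaussians the Cauchy-Schwarz case has the known value Δ²/(4σ²), which gives a quick sanity check on the numerics; by Hölder's inequality the divergence is non-negative for any admissible α.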

