Article

Reducing algorithm complexity for computing an aggregate uncertainty measure

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TSMCA.2007.893457

Keywords

aggregate uncertainty (AU); computational complexity; Dempster-Shafer (D-S) theory

In the theory of evidence, two kinds of uncertainty coexist: nonspecificity and discord. An aggregate uncertainty (AU) measure has been defined to capture both kinds of uncertainty in an aggregate fashion. Meyerowitz et al. proposed an algorithm for calculating AU and validated its practical usage. Although this algorithm was proven correct by Klir and Wierman, in some cases it remains too complex; in fact, when the cardinality of the frame of discernment is very large, computing AU can become infeasible. Therefore, building on the seminal work of Klir and Harmanec, we give justifications for restricting the computation of AU(Bel) to the core of the corresponding belief function, and we propose an algorithm to calculate AU(Bel), the T-algorithm, which reduces the computational complexity of the original algorithm of Meyerowitz et al. We prove that the T-algorithm gives the same results as Meyerowitz's algorithm, and we outline the conditions under which it reduces the computational complexity significantly. Moreover, we illustrate the use of the T-algorithm in computing AU in a practical scenario of target identification.
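To make the setting concrete, the sketch below shows the baseline greedy scheme of Meyerowitz et al. for AU(Bel), with the core restriction the abstract argues for: AU(Bel) is the maximum Shannon entropy over all probability distributions consistent with Bel, and any element outside the core (the union of the focal elements) has zero plausibility, hence zero probability. This is a minimal brute-force Python illustration, not the paper's T-algorithm; the function and variable names (au, bel, m) and the example mass assignment are our own assumptions for illustration.

from itertools import combinations
from math import log2

def subsets(s):
    # All nonempty subsets of s, as frozensets.
    items = list(s)
    return [frozenset(c)
            for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

def bel(m, a):
    # Belief of a: total mass of focal elements contained in a.
    return sum(v for focal, v in m.items() if focal <= a)

def au(m):
    # m: mass assignment mapping frozenset focal elements to masses summing to 1.
    core = frozenset().union(*m.keys())  # outside the core, Pl({x}) = 0, so p(x) = 0
    remaining = set(core)
    assigned = frozenset()
    p = {}
    while remaining:
        # Greedy step (Meyerowitz et al.): pick A maximizing the incremental
        # ratio (Bel(A u assigned) - Bel(assigned)) / |A|; ties go to larger A.
        best, best_ratio = None, -1.0
        for a in subsets(remaining):
            ratio = (bel(m, a | assigned) - bel(m, assigned)) / len(a)
            if ratio > best_ratio or (ratio == best_ratio and len(a) > len(best)):
                best, best_ratio = a, ratio
        for x in best:
            p[x] = best_ratio
        assigned |= best
        remaining -= best
    # Shannon entropy of the maximizing distribution (0 log 0 := 0).
    return -sum(v * log2(v) for v in p.values() if v > 0)

# Hypothetical example: frame {a, b, c, d}, focal elements {a, b} and {b, c};
# d lies outside the core and is never enumerated.
m = {frozenset("ab"): 0.6, frozenset("bc"): 0.4}
print(au(m))  # log2(3) ~ 1.585: the uniform distribution over the core is consistent

Each greedy step still enumerates the subsets of the remaining core elements, so the cost is driven by the core's cardinality rather than the frame's; this is exactly the saving the core restriction buys. The paper's T-algorithm adds further complexity reductions beyond this restriction, which the sketch does not reproduce.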
