Review

Robustness of AI-based prognostic and systems health management

Journal

ANNUAL REVIEWS IN CONTROL
Volume 51, Pages 130-152

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.arcontrol.2021.04.001

Keywords

Prognostics and system health management; Robust AI; Machine learning; PHM; Fault diagnosis


Prognostic and Systems Health Management (PHM) is crucial for solving reliability issues that arise from system complexity. The rapid advancement of AI has allowed data-driven methods to be applied in constructing process models, but testing and evaluating model errors, unknown phenomena, and robustness in safety-critical applications remain challenging as system complexity and connectivity increase.
Prognostic and Systems Health Management (PHM) is an integral part of a system. It is used to solve reliability problems that often arise from complexities in design, manufacturing, operating environment, and system maintenance. For safety-critical applications, a model-based development process for complex systems might not always be ideal, but it is equally important to establish the robustness of the solution. The information revolution has allowed data-driven methods to diffuse into this field to construct the requisite process (or system) models and cope with the so-called big-data phenomenon. This is supported by large datasets that help machine-learning models achieve impressive accuracy. AI technologies are now being integrated into many PHM-related applications, including aerospace, automotive, medical robots, and even autonomous weapon systems.

However, with such rapid growth in complexity and connectivity, a system's behaviour is influenced in unforeseen ways by cyberattacks, human error, incorrect or incomplete models, and even adversarial phenomena. Many of these models depend on the training data and on how well that data represents the test data; they therefore require fine-tuning, or even retraining, whenever there is even a small change in operating conditions or equipment. Yet ambiguity remains in their implementation, even when the learning algorithms classify correctly. Uncertainty can lie in any part of an AI-based PHM model, including the requirements, the assumptions, or the data used for training and validation. These factors lead to sub-optimal solutions with an open interpretation as to why the requirements have not been met. This warrants achieving a level of robustness in the implemented PHM, which is a challenging task for a machine-learning solution. This article presents a framework for testing the robustness of AI-based PHM.
It reviews key milestones achieved in the AI research community on three issues relevant to AI-based PHM in safety-critical applications: robustness to model errors, robustness to unknown phenomena, and empirical evaluation of robustness during deployment. To deal with model errors, techniques from probabilistic inference and robust optimisation are often used to provide a robustness-guarantee metric. For unknown phenomena, techniques include anomaly-detection methods, causal models, the construction of ensembles, and reinforcement learning. The article draws on the authors' work on fault diagnostics and robust optimisation via machine-learning techniques to offer guidelines to the PHM research community. Finally, challenges and future directions are examined, including how to better cope with uncertainties as they appear during the operating life of an asset.
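To make the ensemble route to robustness concrete, the toy sketch below (an illustration under assumed thresholds, not the authors' method) flags a health-indicator reading as a possible unknown phenomenon when a small ensemble of hypothetical fault detectors disagrees, rather than silently trusting a single classifier:

```python
import statistics

# Hypothetical ensemble: three simple fault detectors, each using a slightly
# different decision threshold on one health-indicator feature.
# (Thresholds are illustrative, not taken from the reviewed paper.)
def make_detector(threshold):
    return lambda x: 1 if x > threshold else 0  # 1 = fault, 0 = healthy

ensemble = [make_detector(t) for t in (0.4, 0.5, 0.6)]

def diagnose(x, disagreement_limit=0.2):
    """Return (label, trusted): trusted is False when ensemble members
    disagree, signalling a possible unknown phenomenon that warrants review."""
    votes = [detector(x) for detector in ensemble]
    disagreement = statistics.pvariance(votes)  # 0 when all members agree
    label = "fault" if sum(votes) > len(votes) / 2 else "healthy"
    return label, disagreement <= disagreement_limit

print(diagnose(0.9))   # all detectors agree -> ('fault', True)
print(diagnose(0.45))  # thresholds straddle the input -> ('healthy', False)
```

In a real PHM deployment the untrusted case would route the sample to anomaly detection or a human operator instead of acting on the majority label.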

