4.7 Article

Federated explainable artificial intelligence (fXAI): a digital manufacturing perspective

Related references

Note: Only a selection of the related references is listed.
Article Engineering, Electrical & Electronic

XAI-3DP: Diagnosis and Understanding Faults of 3-D Printer With Explainable Ensemble AI

Deepraj Chowdhury et al.

Summary: This study proposes a data-driven approach for diagnosing faults in 3D printers. Data are collected for three scenarios (healthy condition, bed failure, and arm failure), and an ensemble learning model combining Random Forest and XGBoost achieves an accuracy of 99.75%. The model's interpretability is improved using the SHAP (Shapley additive explanations) library.

IEEE SENSORS LETTERS (2023)
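The Shapley additive explanations used in the study above attribute a model's output to individual input features by averaging each feature's marginal contribution over all subsets of the other features. A minimal sketch of that principle, using a hypothetical additive fault-score model with made-up feature names (illustrative only; not the authors' pipeline or the SHAP library itself):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values: average each feature's marginal
    contribution over all subsets of the remaining features."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Shapley kernel weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Hypothetical additive fault score for a 3D printer (illustrative only).
contrib = {"nozzle_temp": 0.5, "vibration": 0.3, "bed_level": 0.2}
score = lambda s: sum(contrib[f] for f in s)
print(shapley_values(list(contrib), score))
# For an additive model, each Shapley value recovers that feature's own
# contribution, and the values sum to the full-model score.
```

The SHAP library computes these attributions efficiently for tree ensembles such as the Random Forest/XGBoost model in the study; exact enumeration as above is feasible only for a handful of features.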

Article Engineering, Industrial

Predictive models in digital manufacturing: research, applications, and future outlook

Andrew Kusiak

Summary: Data is becoming increasingly valuable in manufacturing, with data-driven applications serving as strong differentiators for enterprises. A widely accepted framework is needed to guide digitisation; in its absence, this paper introduces an example framework that captures the components of a digital enterprise. The complexity of manufacturing systems has increased with the adoption of new technology and software solutions, more frequent product introductions, and variable demand. The digital space allows decisions to be optimised and simulated before implementation in the physical space, with predictive modelling playing a valuable role. The paper identifies three challenges in predictive modelling (model complexity, model interpretability, and model reuse) and illustrates their coverage in recent literature. Eight observations are presented to guide future research in digital manufacturing.

INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH (2023)

Review Computer Science, Interdisciplinary Applications

Health condition monitoring of a complex hydraulic system using Deep Neural Network and DeepSHAP explainable XAI

Aurelien Teguede Keleko et al.

Summary: This paper presents a detailed framework for Condition Monitoring (CM) of hydraulic systems based on multi-sensor data. A data-driven approach using Deep Neural Networks (DNN) predicts the real operating states of the system, and the DeepSHAP methodology is employed to explain the importance of each sensor and enhance the trustworthiness of the DNN model's results. The framework shows that the DNN classifier performs efficiently and that DeepSHAP helps humans understand and interpret the model's results.

ADVANCES IN ENGINEERING SOFTWARE (2023)

Article Computer Science, Interdisciplinary Applications

Interpreting learning models in manufacturing processes: Towards explainable AI methods to improve trust in classifier predictions

Claudia Goldman et al.

Summary: Smart manufacturing processes can benefit from the use of explainable AI methods, which can reduce testing and validation time and improve user trust in the models' outputs. This paper applies these methods to ultrasonic weld quality prediction and body-in-white dimensional variability reduction, demonstrating their importance in the manufacturing industry.

JOURNAL OF INDUSTRIAL INFORMATION INTEGRATION (2023)

Article Engineering, Multidisciplinary

Explainable artificial intelligence for education and training

Krzysztof Fiok et al.

Summary: Researchers and software users are benefiting from the rapid growth of artificial intelligence (AI) in various domains, but are also realizing the need to understand and address the risks and limitations associated with AI. Explainable AI (XAI) methods are designed to mitigate risks and introduce trust into human-AI interactions.

JOURNAL OF DEFENSE MODELING AND SIMULATION: APPLICATIONS, METHODOLOGY, TECHNOLOGY (2022)

Article Automation & Control Systems

From Artificial Intelligence to Explainable Artificial Intelligence in Industry 4.0: A Survey on What, How, and Where

Imran Ahmed et al.

Summary: This article provides a comprehensive survey of AI and XAI-based methods in the context of Industry 4.0. It discusses the technologies enabling Industry 4.0, investigates the main methods used in the literature, and addresses the future research directions and the importance of responsible and human-centric AI and XAI systems.

IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS (2022)

Article Computer Science, Software Engineering

Towards Visual Explainable Active Learning for Zero-Shot Classification

Shichao Jia et al.

Summary: This paper proposes a visual explainable active learning approach called semantic navigator to solve the problems in zero-shot classification. The approach promotes human-AI teaming and improves the efficiency of building zero-shot classification models. User studies have validated the effectiveness of this method.

IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS (2022)

Review Engineering, Industrial

Explainable neural network-based approach to Kano categorisation of product features from online reviews

Junegak Joung et al.

Summary: The paper introduces a neural network-based approach to classify product features into Kano categories using online reviews, demonstrating higher reliability and efficiency in Kano analysis.

INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH (2022)

Article Engineering, Industrial

From digital to universal manufacturing

Andrew Kusiak

Summary: The transformation of the manufacturing industry over the past two decades has been largely driven by data, with digitisation making its mark on many aspects of manufacturing. This paper highlights the evolution towards universal manufacturing and discusses the representation of enterprises in the universal manufacturing cloud. It also defines product- and process-based specifications of digital enterprises and proposes an enterprise configuration algorithm for selecting component models. The algorithm uses different representations of digital component models and applies an extended topological sorting algorithm to construct an integrated digital model.

INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH (2022)

Article Computer Science, Information Systems

Recent Advances on Federated Learning for Cybersecurity and Cybersecurity for Federated Learning for Internet of Things

Bimal Ghimire et al.

Summary: This article discusses the decentralized paradigm in the field of cybersecurity and machine learning for the emerging Internet of Things (IoT). It highlights the concept of federated cybersecurity (FC) and the application of federated learning (FL) in securing the IoT environment. The article also explores the performance issues and future research trends in this area.

IEEE INTERNET OF THINGS JOURNAL (2022)

Article Computer Science, Artificial Intelligence

Explainable AI for domain experts: a post Hoc analysis of deep learning for defect classification of TFT-LCD panels

Minyoung Lee et al.

Summary: In this study, explainable artificial intelligence techniques were used to analyze the predicted results of a DL model for defect image data, producing human-understandable results through visualization and conversion of prediction results. This approach provided domain experts with reliability and interpretability regarding defect classification.

JOURNAL OF INTELLIGENT MANUFACTURING (2022)

Article Computer Science, Artificial Intelligence

XMAP: eXplainable mapping analytical process

Su Nguyen et al.

Summary: This paper proposes a new approach called XMAP for developing AI systems that can provide accuracy and explanations. XMAP is highly modularised and provides interpretability for each step, achieving competitive predictive performance in classification tasks.

COMPLEX & INTELLIGENT SYSTEMS (2022)

Article Engineering, Industrial

Explainable reinforcement learning in production control of job shop manufacturing system

Andreas Kuhnle et al.

Summary: In the age of Industry 4.0, manufacturing is characterized by high product variety and complex material flows, requiring adaptive production planning systems. This paper investigates methods of explainable reinforcement learning in production control, presenting an approach that combines high prediction accuracy with explainability to generate understandable control strategies. The results are demonstrated in simulation on a real-world system from semiconductor manufacturing.

INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH (2022)

Article Automation & Control Systems

Federated Transfer Learning Based Cross-Domain Prediction for Smart Manufacturing

Kevin I-Kai Wang et al.

Summary: In this article, a new federated transfer learning framework is proposed to address the challenges of data scarcity and data privacy in smart manufacturing. The framework allows model sharing across the central server and smart devices, achieving efficient and accurate learning using transfer learning techniques.

IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS (2022)

Article Computer Science, Artificial Intelligence

Failure prediction in production line based on federated learning: an empirical study

Ning Ge et al.

Summary: This study compares the application of federated learning (FL) and centralized learning (CL) in intelligent manufacturing and finds that they perform similarly in failure prediction, indicating that FL can replace CL for this task.

JOURNAL OF INTELLIGENT MANUFACTURING (2022)
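Several of the federated learning studies listed here build on the FedAvg aggregation rule: each site trains locally, and only model parameters travel to the server, where they are averaged in proportion to local data size. A minimal sketch with made-up parameter vectors (illustrative; real FL frameworks add local training loops, secure aggregation, and communication handling):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: combine client parameter vectors,
    weighting each client by its local sample count."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w

# Three hypothetical factories share model parameters, never raw data.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]  # local training-set sizes
print(fed_avg(clients, sizes))  # [3.5, 4.5]
```

The aggregated vector is then broadcast back to the clients for the next local training round; this parameter-only exchange is what lets FL approach the accuracy of centralized learning without pooling raw data.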

Review Computer Science, Hardware & Architecture

Fusion of Federated Learning and Industrial Internet of Things: A survey

Parimala Boobalan et al.

Summary: The Industrial Internet of Things (IIoT) has revolutionized the concept of Industry 4.0 and paved the way for a new industrial era. Researchers are now applying Federated Learning (FL) in IIoT to obtain safe, accurate, robust, and unbiased models. This integration addresses privacy concerns in storing and communicating data by transmitting only encrypted notifications and parameters to the central server.

COMPUTER NETWORKS (2022)

Review Computer Science, Hardware & Architecture

A review of visualisation-as-explanation techniques for convolutional neural networks and their evaluation

Elhassan Mohamed et al.

Summary: This article highlights the importance of visualization techniques in artificial intelligence systems and emphasizes the role of XAI techniques in explaining intelligent system decisions. By discussing in detail the explanation methods and evaluation techniques for CNNs, the review underscores the value of XAI techniques in enhancing model performance and confidence.

DISPLAYS (2022)

Review Computer Science, Information Systems

Federated learning review: Fundamentals, enabling technologies, and future applications

Syreen Banabilah et al.

Summary: This study provides a comprehensive review of the current status and future trends of federated learning in both technical and market domains, serving as a reference point for researchers and practitioners to explore the applications of federated learning in various fields.

INFORMATION PROCESSING & MANAGEMENT (2022)

Proceedings Paper Computer Science, Artificial Intelligence

Roll Wear Prediction in Strip Cold Rolling with Physics-Informed Autoencoder and Counterfactual Explanations

Jakub Jakubowski et al.

Summary: The development of predictive maintenance solutions is a significant challenge in industry. This paper introduces a hybrid model that combines physics knowledge with artificial intelligence to learn the degradation process of work rolls in cold rolling. The results show that the model can distinguish between different wear observations and provide predictive explanations through counterfactuals.

2022 IEEE 9TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA) (2022)
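A counterfactual explanation, as used in the roll-wear paper above, answers the question "what minimal change to the input would flip the model's decision?". A toy sketch with a hypothetical threshold classifier and made-up feature names (not the authors' physics-informed model):

```python
def find_counterfactual(x, predict, feature, step, max_iter=100):
    """Greedily decrease one feature until the predicted class flips.
    Returns the modified input, or None if no flip occurs."""
    cf = dict(x)
    original = predict(cf)
    for _ in range(max_iter):
        cf[feature] -= step
        if predict(cf) != original:
            return cf
    return None

# Hypothetical wear classifier: a roll is flagged "worn" above a threshold.
is_worn = lambda sample: sample["wear_index"] > 0.7
cf = find_counterfactual({"wear_index": 0.9}, is_worn, "wear_index", 0.05)
print(cf)  # a slightly reduced wear_index that is no longer flagged "worn"
```

Real counterfactual methods search over many features under distance and plausibility constraints rather than stepping a single feature in one direction, but the explanatory payload is the same: the smallest input change that alters the outcome.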

Proceedings Paper Computer Science, Hardware & Architecture

Wafer Defect Pattern Classification with Explainable Decision Tree Technique

Ken Chau-Cheung Cheng et al.

Summary: This paper proposes a rule-based method for LDP classification that identifies explainable patterns. Experimental results show that the proposed method achieves roughly the same accuracy as other ML methods while running much faster.

2022 IEEE INTERNATIONAL TEST CONFERENCE (ITC) (2022)

Proceedings Paper Automation & Control Systems

Explainable Anomaly Detection for Industrial Control System Cybersecurity

Do Thu Ha et al.

Summary: Industrial Control Systems (ICSs) play a crucial role in smart manufacturing, but data security is a major concern. Enhancing the detection model with Explainable Artificial Intelligence is essential for anomaly detection and preventing network security intrusions and system attacks.

IFAC PAPERSONLINE (2022)

Proceedings Paper Engineering, Industrial

Information Model to Advance Explainable AI-Based Decision Support Systems in Manufacturing System Design

David S. Cochran et al.

Summary: Artificial intelligence is increasingly used in various areas of production, such as industrial robotics, quality inspection, and cognitive support for employees. This paper proposes a method of designing explainable artificial intelligence decision support systems using information models and discusses how to meet customer requirements and use cases. The information model aims to provide transparent decision descriptions and alternatives to improve the design of manufacturing systems and dynamically adapt to changes in the market and production environment.

MANAGING AND IMPLEMENTING THE DIGITAL TRANSFORMATION, ISIEA 2022 (2022)

Proceedings Paper Automation & Control Systems

Explainable Forecasts of Disruptive Events using Recurrent Neural Networks

Anna L. Buczak et al.

Summary: This paper presents the Crystal Cube method for forecasting disruptive events worldwide, focusing on Irregular Leadership Change. The method uses a Recurrent Neural Network with Long Short-Term Memory units and emphasizes the explanation of network forecasts. SHapley Additive exPlanations (SHAP) is used for individual forecast explanations, and the method can be extended to Deep Reinforcement Learning models for self-driving cars or unmanned fighter jets.

2022 IEEE INTERNATIONAL CONFERENCE ON ASSURED AUTONOMY (ICAA 2022) (2022)

Review Computer Science, Artificial Intelligence

Human-centered explainability for life sciences, healthcare, and medical informatics

Sanjoy Dey et al.

Summary: The rapid development of artificial intelligence (AI) and the availability of biological, medical, and healthcare data have led to the development of various models. However, the lack of transparency in deep AI models has attracted criticism and hindered their adoption in clinical practice. Efforts have been made to improve interpretability through explainable AI (XAI), but concerns about fairness and robustness have limited real-world applications. This article discusses how user-driven XAI can be more useful for different healthcare stakeholders and provides examples of XAI approaches that address their needs.

PATTERNS (2022)

Article Computer Science, Information Systems

Visual Interpretation of CNN Prediction Through Layerwise Sequential Selection of Discernible Neurons

Md Tauhid Bin Iqbal et al.

Summary: In this study, a new post hoc visual interpretation technique is proposed to identify the discriminative image regions that contribute most to the network's prediction. By exploring the layer-to-layer connected structure of neurons and aggregating contributions from adjacent layers, reliable and credible discernible neurons are selected per layer.

IEEE ACCESS (2022)

Article Engineering, Aerospace

Explainable Deep Reinforcement Learning for UAV autonomous path planning

Lei He et al.

Summary: This paper proposes a novel explainable deep neural network-based path planner for autonomous flight of quadrotors in unknown environments, trained with deep reinforcement learning in simulation. A model explanation method based on feature attribution provides easy-to-interpret textual and visual explanations that help end users understand what triggers the planner's behavior.

AEROSPACE SCIENCE AND TECHNOLOGY (2021)

Article Computer Science, Interdisciplinary Applications

The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies

Aniek F. Markus et al.

Summary: This paper discusses the issue of explainable AI in the healthcare domain, proposes a framework for choosing explainable AI methods, and highlights the lack of evaluation metrics in some aspects.

JOURNAL OF BIOMEDICAL INFORMATICS (2021)

Article Computer Science, Artificial Intelligence

Federated learning on non-IID data: A survey

Hangyu Zhu et al.

Summary: This survey analyzes the impact of Non-IID data on machine learning models in federated learning, reviews current research on handling these challenges, discusses the advantages and disadvantages of these approaches, and suggests future research directions.

NEUROCOMPUTING (2021)

Article Computer Science, Artificial Intelligence

Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI

Andreas Holzinger et al.

Summary: AI excels in certain tasks but humans excel at multi-modal thinking and building self-explanatory systems. The medical domain highlights the importance of various modalities contributing to one result. Using conceptual knowledge to guide model training can lead to more explainable, robust, and less biased machine learning models.

INFORMATION FUSION (2021)

Article Computer Science, Artificial Intelligence

Ready, Steady, Go AI: A practical tutorial on fundamentals of artificial intelligence and its applications in phenomics image analysis

Farid Nakhle et al.

Summary: High-throughput image-based technologies are being widely used in digital phenomics, with artificial intelligence playing a crucial role in turning vast data into valuable predictions. Specialized programming skills and a deep understanding of machine learning algorithms are necessary for utilizing these technologies. This tutorial systematically reviews tools, technologies, and services available for phenomics data analysis in explainable AI applications.

PATTERNS (2021)

Article Computer Science, Artificial Intelligence

Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI

Alejandro Barredo Arrieta et al.

INFORMATION FUSION (2020)

Proceedings Paper Computer Science, Information Systems

Explainable AI in Manufacturing: A Predictive Maintenance Case Study

Bahrudin Hrnjica et al.

ADVANCES IN PRODUCTION MANAGEMENT SYSTEMS: TOWARDS SMART AND DIGITAL MANUFACTURING, PT II (2020)

Article Computer Science, Information Systems

Learning Explainable Decision Rules via Maximum Satisfiability

Henrik E. C. Cao et al.

IEEE ACCESS (2020)

Article Computer Science, Information Systems

Explainable Machine Learning for Scientific Insights and Discoveries

Ribana Roscher et al.

IEEE ACCESS (2020)

Article Engineering, Industrial

Optimising product configurations with a data-mining approach

Z. Song et al.

INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH (2009)

Article Computer Science, Artificial Intelligence

Planning product configurations based on sales data

Andrew Kusiak et al.

IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART C-APPLICATIONS AND REVIEWS (2007)

Article Automation & Control Systems

Data mining of printed-circuit board defects

A Kusiak et al.

IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION (2001)

Article Engineering, Manufacturing

Rough set theory: A data mining tool for semiconductor manufacturing

A Kusiak

IEEE TRANSACTIONS ON ELECTRONICS PACKAGING MANUFACTURING (2001)