Article

Structural-damage detection with big data using parallel computing based on MPSoC

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s13042-015-0453-3

Keywords

Damage detection; Big data; Finite element method; Monte-Carlo simulation; Artificial neural network; Parallel computing; Single instruction multiple data; Multiprocessor systems on chips

The lives of many people and vital social activities depend on the proper functioning of civil structures such as nuclear power plants, large bridges, and pipelines, especially during and after high winds, earthquakes, or environmental changes. Structural damage in existing systems can be recognized through signals of stiffness reductions in their components and through changes in their observed displacements under certain load or environmental conditions. The task can be formulated as an inverse problem and solved by a learning approach using neural networks (NN). Here, the network functions as an associative memory device capable of satisfactory diagnostics even in the presence of noisy or incomplete measurements. The idea itself is not new; what has emerged as an exciting new field within civil engineering is obtaining, in real time, advance warning of the onset of durability or structural problems at a stage when preventative action is still possible. This refers to the broad concept of assessing the ongoing in-service performance of structures using a variety of measurement techniques, e.g., real-time kinematic global positioning (GPS-RTK), wireless sensor networks (WSN), and others. These techniques lead to so-called big data: data sets whose size is beyond the ability of commonly used traditional computer software and hardware to acquire, access, and analyze in a reasonable amount of time. One common way to handle the volume and variety of big data is a Hadoop-like batch processing system built on the distributed parallel processing paradigm called MapReduce. MapReduce is a programming model from Google, with an associated implementation, for processing and generating large data sets on a group of machines.
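The paper does not include code, but the MapReduce programming model referred to above can be sketched with a minimal, self-contained word-count example; the map/shuffle/reduce function names and the sample records below are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

# Sketch of the MapReduce model: the map phase emits (key, value) pairs,
# a shuffle step groups values by key, and the reduce phase aggregates
# each group. Word counting is the classic illustration of the pattern.

def map_phase(records):
    for record in records:
        for word in record.split():
            yield word, 1          # emit (word, 1) for every occurrence

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)  # group all values under their key
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

records = ["sensor reading ok", "sensor fault", "reading ok ok"]
counts = reduce_phase(shuffle(map_phase(records)))
```

In a Hadoop-style system the map and reduce phases run distributed over many machines; the sketch only shows the data flow of the programming model itself.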
Continuous (big) data originating from GPS-RTK and WSN is not studied in this paper; only one selected data set is used to determine the technical state of a structure in real time. The big-data approach is nevertheless used here to solve the following problems. First, the small number of available data for neural network training may lead to strange, impractical solutions lying outside the permissible area. To overcome this, the small set of available data is supplemented by numerical data generated from an integration of Monte Carlo simulation and the finite element method (the MCSFEM model). Moreover, the learning approach using neural networks is effective only if the network has, among other required conditions, a complete set of training data and a good (optimal) architecture. An integration of Monte Carlo simulation with the neural network (the MCSNN model) is formulated to find the optimal (global) NN architecture. Both the MCSFEM and MCSNN models require expensive computing, which leads to the necessity of parallel computing based on a multiprocessor system on chip (MPSoC) architecture. Second, the use of the MPSoC architecture leads to the emergence of multiple NNs, a net of neural networks (NoNN), rather than a single NN. The simple trial-and-error approach used for a single neural network is insufficient to determine the optimal NN architecture within a NoNN. To deal with this problem, we develop in this paper a new distributed parallel processing scheme with a master-slave structure in the MapReduce framework, rather than Google's Hadoop-like batch processing system. Our method is based on single instruction, multiple data (SIMD) technology using a computer with a multiprocessor system on a chip (or a computer with multiple cores/CPUs), without requiring Google's computer program or a group of machines.
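The master-slave search over candidate network architectures can be illustrated with a minimal sketch: a master process draws candidate architectures by Monte Carlo sampling and farms their evaluation out to slave worker processes in parallel. The scoring function below is a toy stand-in for actual NN training and validation error, and all names (`score`, `master`) and the candidate encoding (a single hidden-layer width) are hypothetical, not the authors' implementation.

```python
import random
from multiprocessing import Pool

# Toy stand-in for the expensive step: in the paper this would be training
# a candidate network and measuring its validation error.
def score(hidden_width):
    return (hidden_width - 12) ** 2  # pretend error is minimized at width 12

def master(n_candidates=64, workers=4, seed=0):
    # Master: Monte Carlo sampling of candidate architectures.
    rng = random.Random(seed)
    candidates = [rng.randint(1, 32) for _ in range(n_candidates)]
    # Slaves: evaluate candidates in parallel worker processes.
    with Pool(workers) as pool:
        errors = pool.map(score, candidates)
    # Master: keep the best-scoring architecture.
    _, best = min(zip(errors, candidates))
    return best

if __name__ == "__main__":
    print(master())
```

The same master-slave shape applies to the MCSFEM side, where each worker would run one finite element simulation on a Monte Carlo sample instead of scoring a network.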
The scheme is used in both the MCSFEM model and the MCSNN model, to generate a virtual data set and to find the optimal NN architecture within the NoNN. It enables us to create many numerical data quickly: eighty thousand architectures can be created (on a computer with eight CPUs) instead of eight thousand (on a computer with one CPU) in the same computation time. The effect obtained is presented in a numerical example of local damage detection, in real time, for plane steel truss structures, so that one can react quickly when problems appear or predict new trends in the near future.

