In this paper, a crack diagnosis framework is proposed that combines a new signal-to-imaging technique with a transfer learning-aided deep learning framework to automate the diagnostic process. The signal-to-imaging technique converts one-dimensional (1D) acoustic emission (AE) signals from multiple sensors into a two-dimensional (2D) image that captures information under variable operating conditions. In this process, a short-time Fourier transform (STFT) is first applied to the AE signal of each sensor, and the STFT results from the different sensors are then fused to obtain a condition-invariant 2D image of cracks; this scheme is denoted Multi-Sensors Fusion-based Time-Frequency Imaging (MSFTFI). The MSFTFI images are subsequently fed to a fine-tuned transfer learning (FTL) model built on a convolutional neural network (CNN) framework to diagnose crack types. The proposed diagnostic scheme (MSFTFI + FTL) is tested on a standard AE dataset collected from a self-designed spherical tank to validate its performance under variable pressure conditions. The results show that the proposed strategy significantly outperforms classical methods, with average performance improvements of 2.36–20.26%.
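The MSFTFI step described above can be illustrated with a minimal sketch: an STFT is computed for each sensor's 1D AE signal, and the resulting magnitude spectrograms are fused by stacking them along a channel axis to form one multi-channel 2D image. All parameter values below (sampling rate, window length, normalization) are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np
from scipy.signal import stft

def msftfi(signals, fs=1_000_000, nperseg=256):
    """Hypothetical sketch of MSFTFI: per-sensor STFT followed by
    channel-wise fusion into a single 2D image.

    signals : list of 1D arrays, one AE signal per sensor
    fs      : assumed sampling rate in Hz (illustrative value)
    returns : array of shape (freq_bins, time_frames, n_sensors)
    """
    channels = []
    for x in signals:
        # time-frequency representation of one sensor's signal
        _, _, Z = stft(x, fs=fs, nperseg=nperseg)
        mag = np.abs(Z)
        # normalize each spectrogram to [0, 1] before fusion so no
        # single sensor dominates the fused image (an assumption)
        mag = (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)
        channels.append(mag)
    # fuse: sensors become image channels, yielding one 2D "image"
    return np.stack(channels, axis=-1)

# toy usage: three sensors, 4096 samples each, random data as a stand-in
rng = np.random.default_rng(0)
img = msftfi([rng.standard_normal(4096) for _ in range(3)])
```

The fused array can then be treated like an RGB-style image and passed to a pretrained CNN backbone for fine-tuning, which is the role the FTL model plays in the proposed scheme.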
Early online date: 22 Sep 2020
Publication status: Published (in print/issue) - 15 Jan 2021