Article

Deep Spatial-Temporal Model Based Cross-Scene Action Recognition Using Commodity WiFi

Journal

IEEE INTERNET OF THINGS JOURNAL
Volume 7, Issue 4, Pages 3592-3601

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JIOT.2020.2973272

Keywords

Feature extraction; Computational modeling; Deep learning; Logic gates; Wireless fidelity; OFDM; Activity recognition; Action recognition; bidirectional long short-term memory (Bi-LSTM); convolutional neural network (CNN); transfer learning

Funding

  1. Key Program of the National Natural Science Foundation of China [61932013]
  2. National Natural Science Foundation of China [61803212]
  3. Natural Science Foundation of Jiangsu Province [BK20180744]
  4. Natural Science Foundation of the Jiangsu Higher Education Institutions of China [18KJB520034]
  5. China Postdoctoral Science Foundation [2019M651920]

Abstract

With the popularization of Internet-of-Things (IoT) systems, passive action recognition based on channel state information (CSI) has attracted much attention. Most conventional work under the machine-learning framework relies on handcrafted features (e.g., statistical features) that cannot sufficiently describe sequence data and depend heavily on the designer's experience. Therefore, how to automatically learn rich spatial-temporal information from CSI data is a topic worthy of study. In this article, we propose a deep learning framework that integrates spatial features learned by a convolutional neural network (CNN) into a multilayer bidirectional long short-term memory (Bi-LSTM) temporal model. Specifically, CSI streams are segmented into a series of patches, from which spatial features are extracted by our designed CNN structure. To capture long-term dependencies between adjacent sequences, the output of the CNN's fully connected layer for each patch is fed to the Bi-LSTM as sequential input, which further extracts temporal features. Our model is appealing in that it simultaneously learns temporal dynamics and convolutional perceptual representations. To the best of our knowledge, this is the first work to explore deep spatial-temporal features for CSI-based action recognition. Furthermore, to address the problem that a trained model fails completely when the environment changes, we take the existing model as a pretrained model and fine-tune it in the new scenario. This transfer method realizes cross-scene action recognition with low computational cost and satisfactory accuracy. We carry out experiments on indoor data, and the results validate the effectiveness of our algorithm.
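This record contains no code, so the following PyTorch sketch is only a hypothetical reconstruction of the pipeline the abstract describes. The patch size (30 subcarriers x 40 time samples), channel counts, hidden sizes, and 6 action classes are illustrative assumptions, not the authors' reported configuration.

    # Hypothetical sketch of the CNN + multilayer Bi-LSTM pipeline from the
    # abstract. All layer sizes and the patch shape are assumptions.
    import torch
    import torch.nn as nn

    class PatchCNN(nn.Module):
        """Extracts a spatial feature vector from one CSI patch."""
        def __init__(self, feat_dim=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # Assumed 30 x 40 patch -> 32 feature maps of 7 x 10 after two poolings.
            self.fc = nn.Linear(32 * 7 * 10, feat_dim)

        def forward(self, x):                  # x: (batch, 1, 30, 40)
            return torch.relu(self.fc(self.conv(x).flatten(1)))

    class CnnBiLstm(nn.Module):
        """Per-patch CNN features fed as a sequence into a multilayer Bi-LSTM."""
        def __init__(self, feat_dim=128, hidden=64, layers=2, num_classes=6):
            super().__init__()
            self.cnn = PatchCNN(feat_dim)
            self.bilstm = nn.LSTM(feat_dim, hidden, num_layers=layers,
                                  batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, num_classes)

        def forward(self, patches):            # patches: (batch, seq, 1, 30, 40)
            b, t = patches.shape[:2]
            feats = self.cnn(patches.flatten(0, 1)).view(b, t, -1)
            out, _ = self.bilstm(feats)        # (batch, seq, 2 * hidden)
            return self.head(out[:, -1])       # classify from the last time step

    # Quick shape check: 2 samples, each a sequence of 12 CSI patches.
    model = CnnBiLstm()
    print(model(torch.randn(2, 12, 1, 30, 40)).shape)   # torch.Size([2, 6])

The cross-scene transfer step can likewise be sketched as ordinary fine-tuning of the trained model on a small amount of labeled data from the new scene; the checkpoint path and new_scene_loader below are placeholders, not artifacts of the paper.

    # Hypothetical cross-scene fine-tuning: load source-scene weights and update
    # them briefly on a few labeled samples from the new scene.
    model.load_state_dict(torch.load("source_scene.pt"))       # placeholder file
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR for fine-tuning
    criterion = nn.CrossEntropyLoss()
    for patches, labels in new_scene_loader:                   # placeholder DataLoader
        optimizer.zero_grad()
        criterion(model(patches), labels).backward()
        optimizer.step()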
