Abstract
While deep learning models have advanced sensor-based Human Activity Recognition (HAR), they usually require large amounts of annotated sensor data to extract robust features. To alleviate the burden of data annotation, contrastive learning has been applied to sensor-based HAR. Data augmentation is an essential factor in contrastive learning and significantly impacts pre-training performance. However, currently popular augmentation methods do not achieve competitive performance in contrastive learning for sensor-based HAR. Motivated by this issue, we propose a new sensor data augmentation method based on resampling, which introduces variable domain information and simulates realistic activity data by varying the sampling frequency to maximize coverage of the sampling space. The resampling augmentation method was evaluated in supervised learning and in two contrastive learning frameworks (SimCLRHAR and MoCoHAR) on four datasets (UCI-HAR, MotionSense, USC-HAD, and MobiAct), with the mean F1-score as the evaluation metric for downstream tasks. The experimental results show that resampling augmentation outperforms all state-of-the-art augmentation methods in both supervised learning and contrastive learning with a small amount of labeled data. The results also demonstrate that not all data augmentation methods have positive effects in contrastive learning frameworks.
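The core idea of the proposed augmentation can be illustrated with a minimal sketch: resample a fixed-length sensor window to a randomly chosen sampling rate and interpolate it back to the original length, simulating data recorded at a different frequency. The function name, the rate range, and the use of linear interpolation below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def resample_augment(x, rate_range=(0.5, 1.5), rng=None):
    """Simulate a change of sampling frequency for a sensor window.

    x: array of shape (n_samples, n_channels), e.g. a windowed
       accelerometer/gyroscope segment.
    rate_range: assumed range of frequency ratios to sample from
       (hypothetical values, not taken from the paper).
    """
    rng = rng or np.random.default_rng()
    n, c = x.shape
    rate = rng.uniform(*rate_range)        # simulated frequency ratio
    m = max(2, int(round(n * rate)))       # window length at the new rate
    t_orig = np.linspace(0.0, 1.0, n)
    t_new = np.linspace(0.0, 1.0, m)
    # Resample each channel to the new rate via linear interpolation...
    stretched = np.stack(
        [np.interp(t_new, t_orig, x[:, j]) for j in range(c)], axis=1
    )
    # ...then interpolate back so the augmented view keeps the original shape.
    return np.stack(
        [np.interp(t_orig, t_new, stretched[:, j]) for j in range(c)], axis=1
    )
```

In a contrastive setup such as SimCLRHAR or MoCoHAR, two independent calls on the same window would produce the two "views" of a positive pair.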
| Original language | English |
|---|---|
| Pages (from-to) | 22994 |
| Number of pages | 15 |
| Journal | IEEE Sensors Journal |
| Volume | 22 |
| Issue number | 23 |
| Early online date | 19 Oct 2022 |
| DOIs | |
| Publication status | Published (in print/issue) - 1 Dec 2022 |
Bibliographical note
Publisher Copyright: © 2001-2012 IEEE.
Keywords
- Contrastive learning
- Human activity recognition (HAR)
- Resampling
- Sensor data augmentation
- Wearable sensors