Semi-automation of gesture annotation by machine learning and human collaboration

Naoto Ienaga, Alice Cravotta, Kei Terayama, Bryan Scotney, Hideo Saito, M. Grazia Busa

Research output: Contribution to journal › Article › peer-review

Abstract

Gesture and multimodal communication researchers typically annotate video data manually, even though this can be a very time-consuming task. In the present work, a method to detect gestures is proposed as a fundamental step towards a semi-automatic gesture annotation tool. The proposed method can be applied to RGB videos and requires annotations of part of a video as input. The technique combines a pose estimation method with active learning. The experiments show that if about 27% of a video is annotated, the remaining parts can be annotated automatically with an F-score of at least 0.85. Users can first run the tool with a small number of annotations; if the predicted annotations for the remainder of the video are not satisfactory, they can add further annotations and run the tool again. The code has been released so that other researchers and practitioners can use the results of this research. The tool has been confirmed to work in conjunction with ELAN.
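
The iterative workflow outlined in the abstract (annotate a small portion of the video, train a model on per-frame pose features, predict labels for the remaining frames, and ask the annotator for more labels where the model is uncertain) can be illustrated with a minimal sketch. The code below is not the released code: the pose features and gesture labels are simulated, and the RandomForestClassifier and least-confidence query strategy are generic stand-ins for whatever pose estimator, model, and active learning criterion the paper actually uses.

# Hypothetical sketch of the annotate-train-predict loop; not the authors' released code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Simulated per-frame pose features (e.g. flattened keypoint coordinates)
# and binary gesture/no-gesture labels for a 10,000-frame video.
n_frames, n_features = 10_000, 50
X = rng.normal(size=(n_frames, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in for true gesture labels

# Start from a small manually annotated span at the beginning of the video.
labeled = np.zeros(n_frames, dtype=bool)
labeled[: n_frames // 20] = True

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Grow the annotated set until roughly the fraction reported in the paper (~27%).
while labeled.mean() < 0.27:
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[~labeled])[:, 1]
    # Active learning: query the annotator about the least confident frames.
    uncertainty = np.abs(proba - 0.5)
    query = np.where(~labeled)[0][np.argsort(uncertainty)[:200]]
    labeled[query] = True  # the human would supply true labels for these frames

# Automatically annotate the rest of the video and check agreement.
pred = clf.predict(X[~labeled])
print(f"annotated fraction: {labeled.mean():.2f}, "
      f"F-score on remaining frames: {f1_score(y[~labeled], pred):.2f}")

In the actual tool, the predicted intervals would then be exported in a format that ELAN can import, so that the user can inspect them and add further annotations where needed.
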
Original language: English
Pages (from-to): 673-700
Number of pages: 28
Journal: Language Resources and Evaluation
Volume: 56
Issue number: 3
Early online date: 25 Feb 2022
DOIs
Publication status: Published online - 25 Feb 2022

Bibliographical note

Funding Information:
Funding was provided by Japan Society for the Promotion of Science (Grant No. 17J05489).

Publisher Copyright:
© 2022, The Author(s).

Keywords

  • Gesture detection
  • Machine learning
  • Active learning
  • Video annotation
