Combining Users' Activity Survey and Simulators to Evaluate Human Activity Recognition Systems

Gorka Azkune, Aitor Almeida, Diego López-de-Ipiña, Liming (Luke) Chen

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)


Evaluating human activity recognition systems usually involves expensive and time-consuming methodologies in which experiments are run with human participants, with the consequent ethical and legal issues. We propose a novel evaluation methodology to overcome these problems, based on user surveys and a synthetic dataset generator tool. Surveys capture how different users perform activities of daily living, while the synthetic dataset generator creates properly labelled activity datasets modelled with the information extracted from the surveys. Important aspects, such as sensor noise, varying time lapses and erratic user behaviour, can also be simulated with the tool. The proposed methodology allows researchers to carry out their evaluation work more efficiently. To validate the approach, a synthetic dataset generated following the proposed methodology is compared to a real dataset by computing the similarity between sensor occurrence frequencies, and the similarity between the two datasets is found to be significant.
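The abstract's validation step, comparing sensor occurrence frequencies between a real and a synthetic dataset, can be illustrated with a short sketch. This is not the paper's exact procedure; the sensor names, event streams and the choice of cosine similarity as the comparison measure are illustrative assumptions.

```python
from collections import Counter
import math

def sensor_frequencies(events):
    """Normalised occurrence frequency of each sensor in an event stream."""
    counts = Counter(events)
    total = sum(counts.values())
    return {sensor: c / total for sensor, c in counts.items()}

def cosine_similarity(freq_a, freq_b):
    """Cosine similarity between two sensor-frequency distributions (0 to 1)."""
    sensors = set(freq_a) | set(freq_b)
    dot = sum(freq_a.get(s, 0.0) * freq_b.get(s, 0.0) for s in sensors)
    norm_a = math.sqrt(sum(v * v for v in freq_a.values()))
    norm_b = math.sqrt(sum(v * v for v in freq_b.values()))
    return dot / (norm_a * norm_b)

# Hypothetical sensor event streams from a real and a synthetic dataset
real = ["kitchen_door", "fridge", "kettle", "fridge", "tap"]
synthetic = ["kitchen_door", "fridge", "kettle", "tap", "fridge", "kettle"]

similarity = cosine_similarity(
    sensor_frequencies(real), sensor_frequencies(synthetic)
)
print(f"similarity: {similarity:.3f}")
```

A similarity close to 1 would indicate that the synthetic generator activates the same sensors with roughly the same relative frequencies as the real deployment, which is the kind of agreement the paper reports.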
Original language: English
Pages (from-to): 8192-8213
Issue number: 4
Publication status: Published (in print/issue) - Apr 2015
