System Usability Scale Benchmarking for Digital Health Apps: Meta-analysis

Maciej Hyzy, RR Bond, Maurice Mulvenna, Lu Bai, Alan Dix, Simon Leigh, Sophie Hunt

Research output: Contribution to journal › Article › peer-review

44 Citations (Scopus)
156 Downloads (Pure)

Abstract

Background:
The System Usability Scale (SUS) is a widely used scale for quantifying the usability of many software and hardware products. However, the SUS was not specifically designed to evaluate mobile apps or, in particular, digital health apps (DHAs).

Objective:
The aim of this study was to examine whether the widely used SUS distribution for benchmarking (mean 68, SD 12.5) can be used to reliably assess the usability of DHAs.

Methods:
A literature search was performed using the ACM Digital Library, IEEE Xplore, CORE, PubMed, and Google Scholar databases to identify SUS scores related to the usability of DHAs for meta-analysis. The study included papers published between 2011 and 2021 that reported SUS scores for evaluated DHAs, giving a 10-year representation. In total, 117 SUS scores for 114 DHAs were identified. RStudio and the R programming language were used to model the DHA SUS distribution, and a 1-sample, 2-tailed t test was used to compare this distribution with the standard SUS distribution.
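
As an illustration of the comparison described above, the following R sketch runs a 1-sample, 2-tailed t test of a set of SUS scores against the benchmark mean of 68. The sus_scores vector contains hypothetical example values, not the 117 scores analysed in the study.

    # Hypothetical SUS scores for a set of digital health apps
    # (illustrative values only; not the study's dataset).
    sus_scores <- c(82.5, 71.0, 65.5, 90.0, 77.5, 68.0, 59.5, 84.0, 73.5, 70.0)

    # 1-sample, 2-tailed t test against the widely used SUS benchmark mean of 68.
    t_result <- t.test(sus_scores, mu = 68, alternative = "two.sided")

    # Summary statistics and test output.
    mean(sus_scores)
    sd(sus_scores)
    t_result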

Results:
The mean SUS score across all collected apps was 76.64 (SD 15.12); however, this distribution was negatively skewed (skewness –0.52) and was not normally distributed according to the Shapiro-Wilk test (P=.002). The mean SUS score for “physical activity” apps was 83.28 (SD 12.39), which drove the skewness. Excluding “physical activity” apps, the mean SUS score for the collected apps was 68.05 (SD 14.05). A 1-sample, 2-tailed t test indicated that this health app SUS distribution was not statistically significantly different from the standard SUS distribution (P=.98).
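
A minimal R sketch of the distribution checks reported here (Shapiro-Wilk normality test and sample skewness), continuing from the hypothetical sus_scores vector above. The skewness() call assumes the e1071 package; the study does not state which package was used, and other packages (e.g., moments) provide an equivalent function.

    # Shapiro-Wilk test of normality on the collected SUS scores
    # (hypothetical sus_scores vector from the earlier sketch).
    shapiro.test(sus_scores)

    # Sample skewness; skewness() here comes from the e1071 package (an assumption).
    library(e1071)
    skewness(sus_scores)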

Conclusions:
This study concludes that the SUS and the widely accepted benchmark of a mean SUS score of 68 (SD 12.5) are suitable for evaluating the usability of DHAs. We speculate as to why physical activity apps received higher SUS scores than expected. A template for reporting mean SUS scores to facilitate meta-analysis is proposed, together with future work to further examine SUS benchmark scores for DHAs.
Original language: English
Article number: e37290
Pages (from-to): 1-11
Number of pages: 11
Journal: JMIR mHealth and uHealth
Volume: 10
Issue number: 8
DOIs
Publication status: Published (in print/issue) - 18 Aug 2022

Bibliographical note

Funding Information:
This study was conducted as part of a doctoral Co-operative Awards in Science and Technology award, with funding from the Department for the Economy in Northern Ireland and the Organisation for the Review of Care and Health Applications in the United Kingdom.

Publisher Copyright:
© 2022 JMIR Publications. All rights reserved.

Keywords

  • digital health apps
  • data analysis
  • mHealth
  • digital health
  • usability
  • health apps
  • smart phones
  • quality assurance
  • SUS for digital health
  • mHealth usability
  • mHealth SUS scores meta-analysis
  • digital health apps usability
  • mobile health
  • System Usability Scale
  • mobile app
  • SUS meta-analysis
