Towards Validating a Chatbot Usability Scale

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Citation (Scopus)

Abstract

A chatbot usability questionnaire (CUQ) was designed to measure the usability of chatbots. The study objectives were: 1) to test the construct validity of the CUQ (i.e. does it differentiate between chatbots that we rank as having poor, average or good usability), 2) to assess the intra-rater reliability of the CUQ (i.e. do participants provide the same answers/scores when assessing the usability of the same chatbots two weeks apart), and 3) to undertake exploratory factor analysis to study the underlying factors that the CUQ measures. Three chatbots regarded as having good, average and poor usability were selected by the co-authors. Participants used each of the chatbots and completed the CUQ scale for each. Participants repeated this process two weeks later to facilitate the measurement of intra-rater variability. Paired t-tests were used to compare CUQ scores from each of the three chatbots. Exploratory factor analysis was used to identify the factors within the CUQ. Paired t-tests and correlation were used to measure intra-rater reliability. There were 156 CUQ survey completions in total (26 participants completed the CUQ for 3 different chatbots across 2 rounds: 26*3*2 = 156). Intra-rater reliability was supported, as there was a good correlation between how participants completed the CUQ for the same chatbot approximately two weeks apart (r>0.7). As a form of construct validity, the differences in CUQ scores between the three chatbots were statistically significant (p<0.05). Factor analysis shows that the CUQ measures four factors: 1) personality, 2) user experience, 3) error handling and 4) onboarding of the chatbot.
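The intra-rater reliability check described above (correlating the two rounds of CUQ scores and comparing them with a paired t-test) can be sketched in plain Python. The scores below are illustrative placeholders, not the study's data, and the function names are my own:

```python
import math
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def paired_t_statistic(x, y):
    """t statistic for a paired t-test: mean difference / standard error."""
    d = [a - b for a, b in zip(x, y)]
    se = statistics.stdev(d) / math.sqrt(len(d))
    return statistics.mean(d) / se

# Hypothetical CUQ scores for one chatbot, rated by the same
# participants in round 1 and again two weeks later in round 2.
round1 = [72, 65, 80, 58, 90, 77, 68, 83]
round2 = [70, 68, 78, 60, 88, 75, 71, 85]

r = pearson_r(round1, round2)           # r > 0.7 would support intra-rater reliability
t = paired_t_statistic(round1, round2)  # small |t| suggests no systematic shift between rounds
```

In practice the t statistic would be compared against a t distribution with n−1 degrees of freedom to obtain the p-value reported in the abstract; a library such as SciPy (`scipy.stats.ttest_rel`, `scipy.stats.pearsonr`) does this directly.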
Original language: English
Title of host publication: Design, User Experience, and Usability - 12th International Conference, DUXU 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Proceedings
Subtitle of host publication: 12th International Conference, DUXU 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Copenhagen, Denmark, July 23–28, 2023, Proceedings, Part IV
Editors: Aaron Marcus, Elizabeth Rosenzweig, Marcelo M. Soares
Pages: 321–339
Number of pages: 19
Volume: IV
ISBN (Electronic): 978-3-031-35708-4
Publication status: Published (in print/issue) - 2023
Event: HCI International 2023 - Copenhagen, Denmark
Duration: 23 Jul 2023 – 28 Jul 2023
https://2023.hci.international/

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14033 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: HCI International 2023
Abbreviated title: HCI 2023
Country/Territory: Denmark
City: Copenhagen
Period: 23/07/23 – 28/07/23

Bibliographical note

https://support.springer.com/en/support/solutions/articles/6000081233-electronic-offprint-sharing

Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

Keywords

  • Usability
  • chatbots
  • AI
  • UX
  • Conversational user interfaces
  • testing
  • HCI design and evaluation methods
  • User experience
