Abstract
A chatbot usability questionnaire (CUQ) was designed to measure the usability of chatbots. The study objectives were: 1) to test the construct validity of the CUQ (i.e. does it differentiate between chatbots that we rank as having poor, average or good usability), 2) to assess the intra-rater reliability of the CUQ (i.e. do participants provide the same answers/scores when assessing the usability of the same chatbots two weeks apart), and 3) to undertake exploratory factor analysis to study the underlying factors that the CUQ measures. The co-authors selected three chatbots regarded as having good, average and poor usability. Participants used each of the chatbots and completed the CUQ scale for each. Participants repeated this process two weeks later to facilitate the measurement of intra-rater reliability. Paired t-tests were used to compare CUQ scores from each of the three chatbots. Exploratory factor analysis was used to identify the factors within the CUQ. Paired t-tests and correlation were used to measure intra-rater reliability. There was a total of 156 CUQ survey completions (26 participants completed the CUQ for 3 different chatbots and for 2 rounds: 26*3*2 = 156). Intra-rater reliability was supported, as there was a good correlation between how participants completed the CUQ for the same chatbot approximately two weeks apart (r>0.7). As a form of construct validity, the differences between the CUQ scores for the three chatbots were statistically significant (p<0.05). Factor analysis shows that the CUQ measures four factors: 1) personality, 2) user experience, 3) error handling and 4) onboarding of the chatbot.
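The analysis pipeline described above can be sketched in Python with SciPy and scikit-learn. The data below are synthetic placeholders, not the study's data, and the 16-item questionnaire length is assumed from the published CUQ; this is a minimal illustration of the three analyses (paired t-test, test–retest correlation, exploratory factor analysis), not a reproduction of the paper's results.

```python
# Sketch of the abstract's analysis steps on synthetic CUQ data.
import numpy as np
from scipy.stats import ttest_rel, pearsonr
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 26  # participants, as in the study

# Hypothetical CUQ total scores (0-100) for two of the chatbots, round 1,
# plus a second round ~2 weeks later for the "good" chatbot.
good_r1 = rng.normal(80, 8, n)
poor_r1 = rng.normal(45, 8, n)
good_r2 = good_r1 + rng.normal(0, 4, n)  # round 2 scores from the same raters

# Construct validity: paired t-test, since the same participants rated both.
t_stat, p_value = ttest_rel(good_r1, poor_r1)

# Intra-rater reliability: correlation between round-1 and round-2 scores.
r, _ = pearsonr(good_r1, good_r2)

# Exploratory factor analysis on per-item responses (16 items assumed,
# 1-5 Likert scale), extracting the four factors reported in the abstract.
items = rng.integers(1, 6, size=(n, 16)).astype(float)
fa = FactorAnalysis(n_components=4, random_state=0).fit(items)
loadings = fa.components_  # shape (4, 16): loading of each item on each factor
```

With clearly separated groups such as these, `p_value` falls well below 0.05 and `r` exceeds 0.7, mirroring the thresholds reported in the abstract; item loadings would then be inspected to name the factors.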
Original language | English |
---|---|
Title of host publication | Design, User Experience, and Usability - 12th International Conference, DUXU 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Proceedings |
Subtitle of host publication | 12th International Conference, DUXU 2023 Held as Part of the 25th HCI International Conference, HCII 2023 Copenhagen, Denmark, July 23–28, 2023 Proceedings, Part IV |
Editors | Aaron Marcus, Elizabeth Rosenzweig, Marcelo M. Soares |
Pages | 321–339 |
Number of pages | 19 |
Volume | IV |
ISBN (Electronic) | 978-3-031-35708-4 |
DOIs | |
Publication status | Published (in print/issue) - 2023 |
Event | HCI International 2023, Copenhagen, Denmark. Duration: 23 Jul 2023 → 28 Jul 2023. https://2023.hci.international/ |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 14033 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | HCI International 2023 |
---|---|
Abbreviated title | HCI 2023 |
Country/Territory | Denmark |
City | Copenhagen |
Period | 23/07/23 → 28/07/23 |
Internet address | https://2023.hci.international/ |
Bibliographical note
Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
Keywords
- Usability
- chatbots
- AI
- UX
- Conversational user interfaces
- testing
- HCI design and evaluation methods
- User experience