Abstract
The continued advancement of technology and computer systems calls for ongoing improvement in how users and computers interact. The availability of unobtrusive, visual-based input modalities, such as eye trackers and RGB cameras, has made the detection of users’ emotions and cognitive states viable. However, how to exploit such pervasive input modalities so that a computer can actively interact with a user based on their current emotional and cognitive states remains a challenging problem. This paper presents a research study currently taking place at Ulster University, which investigates creating a user model that facilitates the future creation of adaptive human-computer interaction. In other words, this research project focuses on exploiting visual-based input modalities to perceive and infer a user’s emotional and cognitive states in order to drive adaptation of the graphical user interface, or indeed of the user interaction itself.
Original language | English |
---|---|
Title of host publication | Unknown Host Publication |
Publisher | Association for Computing Machinery |
Number of pages | 4 |
Publication status | Accepted/In press - 20 Jun 2017 |
Event | British HCI Conference 2017, Doctoral Consortium - Sunderland, United Kingdom. Duration: 20 Jun 2017 → … |
Conference
Conference | British HCI Conference 2017, Doctoral Consortium |
---|---|
Period | 20/06/17 → … |
Keywords
- User Modelling
- Adaptive Human-Computer Interaction
- Visual-Based
- Input Perception Modality