Abstract
The advancement of technology and computer systems requires continuous enhancement of the interaction between users and computers. Consequently, the availability of unobtrusive visual-based input modalities, such as eye trackers and RGB cameras, has enabled the viable detection of users’ emotions and cognitive states. However, how to utilise such pervasive input modalities to enable a computer to actively interact with a user based on their current emotional and cognitive states is a challenging problem. This paper presents a research study currently taking place at Ulster University, which investigates the creation of a user model that facilitates the future creation of Adaptive Human Computer Interaction. In other words, this research project focuses on exploiting visual-based input modalities to perceive and infer a user’s emotional and cognitive states in order to drive adaptation of the graphical user interface, or indeed the user interaction itself.
| | |
|---|---|
| Language | English |
| Title of host publication | Unknown Host Publication |
| Number of pages | 4 |
| Publication status | Accepted/In press - 20 Jun 2017 |
| Event | British HCI Conference 2017, Doctoral Consortium - Sunderland, United Kingdom. Duration: 20 Jun 2017 → … |
Conference
| | |
|---|---|
| Conference | British HCI Conference 2017, Doctoral Consortium |
| Period | 20/06/17 → … |
Keywords
- User Modelling
- Adaptive Human-Computer Interaction
- Visual-Based
- Input Perception Modality
Cite this
User Interaction Modelling for Adaptive Human Computer Interaction. / Samara, Anas; Galway, Leo; Bond, Raymond; Wang, Hui.
Unknown Host Publication. 2017. Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
TY - GEN
T1 - User Interaction Modelling for Adaptive Human Computer Interaction
AU - Samara, Anas
AU - Galway, Leo
AU - Bond, Raymond
AU - Wang, Hui
PY - 2017/6/20
Y1 - 2017/6/20
N2 - The advancement of technology and computer systems requires continuous enhancement of the interaction between users and computers. Consequently, the availability of unobtrusive visual-based input modalities, such as eye trackers and RGB cameras, has enabled the viable detection of users’ emotions and cognitive states. However, how to utilise such pervasive input modalities to enable a computer to actively interact with a user based on their current emotional and cognitive states is a challenging problem. This paper presents a research study currently taking place at Ulster University, which investigates the creation of a user model that facilitates the future creation of Adaptive Human Computer Interaction. In other words, this research project focuses on exploiting visual-based input modalities to perceive and infer a user’s emotional and cognitive states in order to drive adaptation of the graphical user interface, or indeed the user interaction itself.
KW - User Modelling
KW - Adaptive Human-Computer Interaction
KW - Visual-Based
KW - Input Perception Modality
M3 - Conference contribution
BT - Unknown Host Publication
ER -
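The abstract describes a perceive → infer → adapt pipeline: visual-based input (e.g. an eye tracker or RGB camera) feeds a user model whose inferred cognitive state drives adaptation of the graphical user interface. The following is a minimal illustrative sketch of such a loop; it is not taken from the paper, and every name, feature, and threshold in it (`VisualFeatures`, `pupil_diameter_mm`, the 4.0 mm cut-off, the adaptation policy) is a hypothetical assumption for demonstration only.

```python
# Illustrative sketch (not the authors' method): a toy perceive -> infer -> adapt
# loop of the kind the abstract outlines. All feature names and thresholds are
# hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class VisualFeatures:
    """Hypothetical readings from a visual-based input modality."""
    pupil_diameter_mm: float  # e.g. from an eye tracker
    blink_rate_hz: float      # e.g. derived from an RGB camera


def infer_cognitive_load(f: VisualFeatures) -> str:
    """Toy user model: classify cognitive load from two visual features."""
    if f.pupil_diameter_mm > 4.0 and f.blink_rate_hz < 0.2:
        return "high"
    return "normal"


def adapt_interface(state: str) -> dict:
    """Toy adaptation policy: simplify the GUI when load is inferred as high."""
    if state == "high":
        return {"layout": "simplified", "notifications": "muted"}
    return {"layout": "full", "notifications": "enabled"}


# Example: a reading suggesting high load yields a simplified layout.
state = infer_cognitive_load(VisualFeatures(pupil_diameter_mm=4.5, blink_rate_hz=0.1))
print(adapt_interface(state))  # {'layout': 'simplified', 'notifications': 'muted'}
```

In a real system the rule-based `infer_cognitive_load` would be replaced by a learned user model, which is precisely the open problem the research study targets.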