Abstract
Enhancing computers with the facility to perceive and recognise users' feelings and abilities, as well as aspects related to the task at hand, is a key element in creating Intelligent Human-Computer Interaction. Many studies have focused on predicting users' cognitive and affective states and other human factors, such as usability and user experience, to achieve high-quality interaction. However, a complementary approach is needed that empowers computers to perceive more about the task being conducted by the user. This paper presents a study that explores user-driven, task-based classification, in which the classification algorithm uses features from visual input modalities, i.e. facial expression captured via webcam and eye gaze behaviour captured via eye tracker. Within the experiments presented herein, the dataset employed by the model comprises four different computer-based tasks. Using a Support Vector Machine-based classifier, the average classification accuracy achieved across 42 subjects is 85.52% when facial-based features are used as the input feature vector, and 49.65% when eye gaze-based features are used. Combining both types of features achieves an average classification accuracy of 87.63%.
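As a rough illustration of the feature-fusion approach the abstract describes, the sketch below concatenates facial-expression and eye-gaze feature vectors and classifies them with an SVM. It is a minimal sketch only: the library (scikit-learn), feature names, dimensions, and synthetic data are assumptions for illustration, not the paper's actual features or preprocessing.

```python
# Hypothetical sketch: task classification by fusing facial and eye-gaze
# features with an SVM. Feature dimensions and data are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 200                                # hypothetical number of task segments
facial = rng.normal(size=(n_samples, 48))      # e.g. facial-expression descriptors
gaze = rng.normal(size=(n_samples, 12))        # e.g. fixation/saccade statistics
labels = rng.integers(0, 4, size=n_samples)    # four computer-based tasks

# Feature-level fusion: concatenate both modalities into one vector per sample.
fused = np.hstack([facial, gaze])

# Standardise features, then train/evaluate an RBF-kernel SVM with cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, fused, labels, cv=5)
print("Mean cross-validated accuracy: %.2f%%" % (100 * scores.mean()))
```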
Original language | English |
---|---|
Title of host publication | Unknown Host Publication |
Publisher | Springer |
Number of pages | 6 |
Publication status | Accepted/In press - 15 Jun 2017 |
Event | 11th International Conference on Ubiquitous Computing and Ambient Intelligence (UCAmI) 2017, Villanova University, Philadelphia, Pennsylvania, USA. Duration: 15 Jun 2017 → … |
Conference
Conference | 11th International Conference on Ubiquitous Computing and Ambient Intelligence (UCAmI) 2017 |
---|---|
Period | 15/06/17 → … |
Keywords
- Task Classification
- Visual-Based Input Modalities
- Intelligent HCI