Human-Computer Interaction Task Classification via Visual-Based Input Modalities

A Samara, L Galway, R Bond, H Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Enhancing computers with the facility to perceive and recognise the user's feelings and abilities, as well as aspects related to the task, is a key element in the creation of Intelligent Human-Computer Interaction. Many studies have focused on predicting users' cognitive and affective states and other human factors, such as usability and user experience, in order to achieve high-quality interaction. However, there is a need for another approach that empowers computers to perceive more about the task being conducted by the user. This paper presents a study that explores user-driven, task-based classification, whereby the classification algorithm uses features from visual-based input modalities, i.e. facial expression via webcam and eye gaze behaviour via eye-tracker. Within the experiments presented herein, the dataset employed by the model comprises four different computer-based tasks. Using a Support Vector Machine-based classifier, the average classification accuracy achieved across 42 subjects is 85.52% when utilising facial-based features as the input feature vector, and 49.65% when using eye gaze-based features. Furthermore, a combination of both types of features achieves an average classification accuracy of 87.63%.
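As a rough, illustrative sketch of the classification setup the abstract describes (not the authors' implementation), the snippet below assumes scikit-learn, an RBF-kernel SVM, and synthetic placeholder matrices standing in for the extracted per-trial facial and gaze feature vectors; feature-level fusion is shown as simple concatenation of the two modalities.

```python
# Hedged sketch: task classification from fused visual-based features.
# Assumptions (not from the paper): scikit-learn SVC with an RBF kernel,
# random matrices as placeholders for real facial / eye-gaze features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_trials = 200                                # placeholder trial count
facial = rng.normal(size=(n_trials, 34))      # e.g. facial-expression features
gaze = rng.normal(size=(n_trials, 12))        # e.g. fixation/saccade statistics
labels = rng.integers(0, 4, size=n_trials)    # four computer-based task classes

# Feature-level fusion: concatenate the two modalities per trial.
X = np.hstack([facial, gaze])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, stratify=labels, random_state=0
)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real features, the facial-only, gaze-only, and fused conditions reported in the abstract would correspond to fitting the same pipeline on `facial`, `gaze`, and `X` respectively.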
Language: English
Title of host publication: Unknown Host Publication
Number of pages: 6
Publication status: Accepted/In press - 15 Jun 2017
Event: 11th International Conference on Ubiquitous Computing and Ambient Intelligence (UCAmI) 2017 - Villanova University, Philadelphia (Pennsylvania, USA)
Duration: 15 Jun 2017 → …

Conference

Conference: 11th International Conference on Ubiquitous Computing and Ambient Intelligence (UCAmI) 2017
Period: 15/06/17 → …

Keywords

  • Task Classification
  • Visual-Based Input Modalities
  • Intelligent HCI

Cite this


Samara, A, Galway, L, Bond, R & Wang, H 2017, Human-Computer Interaction Task Classification via Visual-Based Input Modalities. in Unknown Host Publication. 11th International Conference on Ubiquitous Computing and Ambient Intelligence (UCAmI) 2017, 15/06/17.
