Enhancing an Eye-Tracker based Human-Computer Interface with Multi-modal Accessibility Applied for Text Entry

Jai Vardhan Singh, Girijesh Prasad

    Research output: Contribution to journal › Article › peer-review


    Abstract

    In the natural course of events, human beings usually make use of multiple sensory modalities for effective communication and for efficiently executing day-to-day tasks. For instance, during verbal conversation we make use of voice, eyes, and various body gestures. Effective human-computer interaction likewise involves hands, eyes, and voice, if available. Therefore, by combining multiple sensory modalities, we can make the whole process more natural and ensure enhanced performance even for disabled users. Towards this end, we have developed a multi-modal human-computer interface (HCI) by combining an eye-tracker with a soft-switch, which may be considered as representing another modality. This multi-modal HCI is applied to text entry using a virtual keyboard appropriately designed in-house, facilitating enhanced performance. Our experimental results demonstrate that multi-modal text entry through the virtual keyboard is more efficient and less strenuous than a single-modality system, and that it also solves the Midas-touch problem, which is inherent in an eye-tracker based HCI where only dwell time is used for selecting a character.
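
    To make the two selection mechanisms concrete, the sketch below contrasts plain dwell-time selection, which exhibits the Midas-touch problem, with a switch-confirmed multi-modal variant as described in the abstract. This is a minimal illustrative Python sketch, not the authors' implementation; the names and the dwell threshold (GazeSample, DWELL_TIME_S) are assumptions made for illustration.

        from dataclasses import dataclass

        DWELL_TIME_S = 1.0  # assumed dwell threshold; the paper does not fix a value here

        @dataclass
        class GazeSample:
            key: str      # virtual-keyboard key currently under the gaze point
            t: float      # timestamp in seconds
            switch: bool  # state of the soft-switch (the second modality)

        def dwell_select(samples):
            """Select a key once gaze rests on it for DWELL_TIME_S.
            Every sufficiently long fixation triggers a selection, even an
            unintended one -- the Midas-touch problem."""
            current, start, out = None, None, []
            for s in samples:
                if s.key != current:
                    current, start = s.key, s.t
                elif s.t - start >= DWELL_TIME_S:
                    out.append(current)
                    start = s.t  # re-arm for the next dwell on the same key
            return out

        def multimodal_select(samples):
            """Select the gazed-at key only when the soft-switch is pressed:
            gaze points, the switch confirms, so idle fixations are harmless."""
            return [s.key for s in samples if s.switch]

        # Usage: the user rests on 'A' without intent, then confirms 'B'.
        stream = [GazeSample('A', 0.0, False), GazeSample('A', 1.2, False),
                  GazeSample('B', 2.0, False), GazeSample('B', 2.5, True)]
        print(dwell_select(stream))       # ['A']  -- unintended Midas-touch selection
        print(multimodal_select(stream))  # ['B']  -- only the confirmed key

    In this sketch, replacing the timer with an explicit confirmation is what removes spurious selections during idle fixations, at the cost of requiring the user to operate the soft-switch.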
    Original language: English
    Pages (from-to): 16-22
    Journal: International Journal of Computer Applications
    Volume: 130
    Issue number: 16
    Publication status: Published (in print/issue) - 1 Nov 2015

    Keywords

    • Multi-modal
    • eye-tracker
    • HCI
    • eye typing
    • eye-gaze
