Abstract
In the natural course of daily life, human beings make use of multiple sensory modalities to communicate effectively and to carry out everyday tasks efficiently. For instance, during verbal conversation we use voice, eye contact, and various body gestures. Effective human-computer interaction likewise involves the hands, eyes, and voice, where available. By combining multiple sensory modalities, we can therefore make the whole interaction more natural and ensure enhanced performance, even for disabled users. Towards this end, we have developed a multimodal human-computer interface (HCI) by combining an eye-tracker with a soft-switch, which may be regarded as representing a second modality. This multimodal HCI is applied to text entry using a virtual keyboard designed in-house to facilitate enhanced performance. Our experimental results demonstrate that multimodal text entry through the virtual keyboard is more efficient and less strenuous than a single-modality system, and that it also solves the Midas-touch problem inherent in eye-tracker-based HCI systems in which dwell time alone is used to select a character.
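As a minimal sketch of the idea (not the authors' implementation), the Python code below contrasts dwell-only selection, where any sufficiently long fixation types a character and hence suffers from the Midas-touch problem, with gaze-plus-switch selection, where gaze only points and the soft-switch confirms. The `gaze_position()` and `switch_pressed()` functions and the key objects are hypothetical placeholders standing in for an eye-tracker SDK, a soft-switch driver, and the virtual-keyboard layout.

```python
import time

# Hypothetical sensor stubs: a real system would read these from an
# eye-tracker SDK and a soft-switch device (e.g. a USB HID button).
def gaze_position():
    """Return the (x, y) screen coordinates currently fixated."""
    raise NotImplementedError

def switch_pressed():
    """Return True while the soft-switch is held down."""
    raise NotImplementedError

def key_under(pos, keyboard):
    """Map a gaze point to the virtual-keyboard key it falls on, or None.

    Each key is assumed to expose contains(pos) and a .char attribute.
    """
    for key in keyboard:
        if key.contains(pos):
            return key
    return None

DWELL_TIME = 1.0  # seconds of fixation that triggers a dwell selection

def dwell_only_loop(keyboard, emit):
    """Single-modality eye typing: every sufficiently long fixation
    selects a key, so merely looking at the keyboard can type
    characters (the Midas-touch problem)."""
    current, since = None, time.monotonic()
    while True:
        key = key_under(gaze_position(), keyboard)
        now = time.monotonic()
        if key is not current:
            current, since = key, now   # gaze moved: restart the timer
        elif key is not None and now - since >= DWELL_TIME:
            emit(key.char)              # unintended selections possible
            since = now                 # restart the dwell timer

def gaze_plus_switch_loop(keyboard, emit):
    """Multimodal eye typing: gaze only points; a key is typed only
    when the soft-switch confirms, so free viewing never selects."""
    while True:
        if switch_pressed():
            key = key_under(gaze_position(), keyboard)
            if key is not None:
                emit(key.char)          # selection requires explicit intent
            while switch_pressed():
                pass                    # debounce: wait for switch release
```

In the dwell-only loop the selection trigger is time alone, so looking at the keyboard while merely reading it produces spurious characters; in the multimodal loop the switch press carries the user's intent, which is the essence of how the second modality resolves the Midas-touch problem.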
| Original language | English |
| --- | --- |
| Pages (from-to) | 16-22 |
| Journal | International Journal of Computer Applications |
| Volume | 130 |
| Issue number | 16 |
| DOIs | |
| Publication status | Published (in print/issue) - 1 Nov 2015 |
Keywords
- multi-modal
- eye-tracker
- HCI
- eye typing
- eye-gaze