Innovative Method for Unsupervised Voice Activity Detection and Classification of Audio Segments

Zulfiqar Ali, Muhammad Talha

Research output: Contribution to journal › Article

11 Citations (Scopus)
71 Downloads (Pure)

Abstract

An accurate and noise-robust voice activity detection (VAD) system can be widely used for emerging speech technologies in the fields of audio forensics, wireless communication, and speech recognition. However, in real-life applications, a sufficient amount of data, or human-annotated data, to train such a system may not be available. Therefore, a supervised VAD system cannot be used in such situations. In this paper, an unsupervised method for VAD is proposed to label segments of speech presence and speech absence in an audio recording. To make the proposed method efficient and computationally fast, it is implemented using long-term features computed with the Katz algorithm for fractal dimension estimation. Two databases of different languages are used to evaluate the performance of the proposed method. The first is the Texas Instruments/Massachusetts Institute of Technology (TIMIT) database, and the second is the King Saud University (KSU) Arabic speech database. The language of TIMIT is English, while the language of the KSU speech database is Arabic. TIMIT is recorded in only one environment, whereas the KSU speech database is recorded in distinct environments using various recording systems that contain sound cards of different qualities and models. The evaluation of the proposed method suggests that it labels voiced and unvoiced segments reliably in both clean and noisy audio.
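The abstract names the Katz algorithm for fractal dimension estimation as the basis of the long-term features. The paper's exact feature computation and decision logic are not reproduced here; the sketch below shows only the standard Katz fractal dimension estimator, plus a simple midpoint-threshold labeling rule that stands in for the paper's unsupervised classifier (the frame length, hop size, and thresholding rule are illustrative assumptions, not the authors' settings):

```python
import numpy as np

def katz_fd(frame):
    """Katz fractal dimension of a 1-D signal frame.

    FD = log10(n) / (log10(n) + log10(d / L)), where L is the total
    curve length (sum of successive absolute amplitude differences),
    d is the maximum distance from the first sample, and n is the
    number of steps.
    """
    frame = np.asarray(frame, dtype=float)
    n = len(frame) - 1
    L = np.sum(np.abs(np.diff(frame)))        # total waveform "length"
    d = np.max(np.abs(frame - frame[0]))      # farthest excursion from start
    if L == 0 or d == 0:
        return 1.0  # flat frame behaves like a straight line (FD = 1)
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def label_frames(signal, frame_len=400, hop=200):
    """Illustrative unsupervised labeling: threshold frame-wise Katz FD
    at the midpoint between the smallest and largest observed values.
    This is a placeholder decision rule, not the paper's method; which
    side of the threshold corresponds to speech depends on the signal
    and noise characteristics."""
    starts = range(0, len(signal) - frame_len + 1, hop)
    fds = np.array([katz_fd(signal[i:i + frame_len]) for i in starts])
    threshold = 0.5 * (fds.min() + fds.max())
    return (fds > threshold).astype(int), fds
```

As a sanity check, a straight line yields FD = 1 exactly, and a white-noise frame yields a higher FD than a pure tone of the same length, which is what makes the feature usable for separating waveform classes without labels.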

Original language: English
Pages (from-to): 15494-15504
Number of pages: 11
Journal: IEEE Access
Volume: 6
DOIs
Publication status: Published - 13 Feb 2018

Keywords

  • fractal dimension
  • Katz algorithm
  • KSU speech database
  • TIMIT database
  • Voiced and unvoiced segmentation

