An Automatic Digital Audio Authentication/Forensics System

Zulfiqar Ali, Muhammad Imran, Mansour Alsulaiman

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

With the continuous rise in ingenious forgery, a wide range of digital audio authentication applications is emerging as a preventive and detective control in real-world circumstances, such as forged evidence, breach of copyright protection, and unauthorized data access. To investigate and verify such recordings, this paper presents a novel automatic authentication system that differentiates between forged and original audio. The design philosophy of the proposed system is primarily based on three psychoacoustic principles of hearing, which are implemented to simulate the human sound perception system. Moreover, the proposed system can distinguish between audio recorded in different environments with the same microphone. For both audio authentication and environment classification, the features computed from the psychoacoustic principles of hearing are fed to a Gaussian mixture model, which makes the automatic decisions. It is worth mentioning that the proposed system authenticates an unknown speaker irrespective of the audio content, i.e., independently of narrator and text. To evaluate the performance of the proposed system, audio recordings made in multiple environments are forged in such a way that a human listener cannot detect the tampering. A subjective evaluation by three human evaluators is performed to verify the quality of the generated forged audio. The proposed system achieves a classification accuracy of 99.2% ± 2.6. Furthermore, it attains 100% accuracy in the other scenarios, namely text-dependent and text-independent audio authentication.
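The abstract's decision step — feeding per-class features to a Gaussian mixture model and choosing the class with the highest likelihood — can be sketched as follows. This is an illustrative sketch only: the paper's actual psychoacoustic features are not given here, so synthetic feature vectors and scikit-learn's `GaussianMixture` stand in for them; the class names and dimensions are assumptions.

```python
# Illustrative GMM-classifier sketch (not the paper's implementation):
# train one Gaussian mixture model per class, then label a clip by the
# model that assigns it the highest average log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-frame feature vectors of two classes
# (the real system would extract psychoacoustic features from audio).
original_feats = rng.normal(loc=0.0, scale=1.0, size=(500, 12))
forged_feats = rng.normal(loc=2.0, scale=1.0, size=(500, 12))

# One GMM per class, as in a classic GMM classifier.
gmm_original = GaussianMixture(n_components=4, random_state=0).fit(original_feats)
gmm_forged = GaussianMixture(n_components=4, random_state=0).fit(forged_feats)

def classify(frames):
    """Return the class whose GMM gives the highest mean log-likelihood."""
    scores = {
        "original": gmm_original.score(frames),  # mean log-likelihood per frame
        "forged": gmm_forged.score(frames),
    }
    return max(scores, key=scores.get)

# A clip drawn from the 'forged' distribution should score higher there.
test_clip = rng.normal(loc=2.0, scale=1.0, size=(100, 12))
print(classify(test_clip))  # prints "forged"
```

The same train-one-model-per-class pattern extends directly to the environment-classification scenario: one GMM per recording environment, with the decision rule unchanged.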

Language: English
Pages: 2994-3007
Number of pages: 14
Journal: IEEE Access
Volume: 5
DOI: 10.1109/ACCESS.2017.2672681
Publication status: Published - 24 Feb 2017


Keywords

  • audio forensics
  • Digital audio authentication
  • forgery
  • human psychoacoustic principles
  • machine learning algorithm

Cite this

Ali, Zulfiqar; Imran, Muhammad; Alsulaiman, Mansour. / An Automatic Digital Audio Authentication/Forensics System. In: IEEE Access. 2017; Vol. 5. pp. 2994-3007.
@article{c303b7cd62f54f54ab046a45ce51a919,
title = "An Automatic Digital Audio Authentication/Forensics System",
abstract = "With the continuous rise in ingenious forgery, a wide range of digital audio authentication applications is emerging as a preventive and detective control in real-world circumstances, such as forged evidence, breach of copyright protection, and unauthorized data access. To investigate and verify such recordings, this paper presents a novel automatic authentication system that differentiates between forged and original audio. The design philosophy of the proposed system is primarily based on three psychoacoustic principles of hearing, which are implemented to simulate the human sound perception system. Moreover, the proposed system can distinguish between audio recorded in different environments with the same microphone. For both audio authentication and environment classification, the features computed from the psychoacoustic principles of hearing are fed to a Gaussian mixture model, which makes the automatic decisions. It is worth mentioning that the proposed system authenticates an unknown speaker irrespective of the audio content, i.e., independently of narrator and text. To evaluate the performance of the proposed system, audio recordings made in multiple environments are forged in such a way that a human listener cannot detect the tampering. A subjective evaluation by three human evaluators is performed to verify the quality of the generated forged audio. The proposed system achieves a classification accuracy of 99.2{\%} ± 2.6. Furthermore, it attains 100{\%} accuracy in the other scenarios, namely text-dependent and text-independent audio authentication.",
keywords = "audio forensics, Digital audio authentication, forgery, human psychoacoustic principles, machine learning algorithm",
author = "Zulfiqar Ali and Muhammad Imran and Mansour Alsulaiman",
year = "2017",
month = "2",
day = "24",
doi = "10.1109/ACCESS.2017.2672681",
language = "English",
volume = "5",
pages = "2994--3007",
journal = "IEEE Access",
issn = "2169-3536",
}


TY - JOUR

T1 - An Automatic Digital Audio Authentication/Forensics System

AU - Ali, Zulfiqar

AU - Imran, Muhammad

AU - Alsulaiman, Mansour

PY - 2017/2/24

Y1 - 2017/2/24

N2 - With the continuous rise in ingenious forgery, a wide range of digital audio authentication applications is emerging as a preventive and detective control in real-world circumstances, such as forged evidence, breach of copyright protection, and unauthorized data access. To investigate and verify such recordings, this paper presents a novel automatic authentication system that differentiates between forged and original audio. The design philosophy of the proposed system is primarily based on three psychoacoustic principles of hearing, which are implemented to simulate the human sound perception system. Moreover, the proposed system can distinguish between audio recorded in different environments with the same microphone. For both audio authentication and environment classification, the features computed from the psychoacoustic principles of hearing are fed to a Gaussian mixture model, which makes the automatic decisions. It is worth mentioning that the proposed system authenticates an unknown speaker irrespective of the audio content, i.e., independently of narrator and text. To evaluate the performance of the proposed system, audio recordings made in multiple environments are forged in such a way that a human listener cannot detect the tampering. A subjective evaluation by three human evaluators is performed to verify the quality of the generated forged audio. The proposed system achieves a classification accuracy of 99.2% ± 2.6. Furthermore, it attains 100% accuracy in the other scenarios, namely text-dependent and text-independent audio authentication.

AB - With the continuous rise in ingenious forgery, a wide range of digital audio authentication applications is emerging as a preventive and detective control in real-world circumstances, such as forged evidence, breach of copyright protection, and unauthorized data access. To investigate and verify such recordings, this paper presents a novel automatic authentication system that differentiates between forged and original audio. The design philosophy of the proposed system is primarily based on three psychoacoustic principles of hearing, which are implemented to simulate the human sound perception system. Moreover, the proposed system can distinguish between audio recorded in different environments with the same microphone. For both audio authentication and environment classification, the features computed from the psychoacoustic principles of hearing are fed to a Gaussian mixture model, which makes the automatic decisions. It is worth mentioning that the proposed system authenticates an unknown speaker irrespective of the audio content, i.e., independently of narrator and text. To evaluate the performance of the proposed system, audio recordings made in multiple environments are forged in such a way that a human listener cannot detect the tampering. A subjective evaluation by three human evaluators is performed to verify the quality of the generated forged audio. The proposed system achieves a classification accuracy of 99.2% ± 2.6. Furthermore, it attains 100% accuracy in the other scenarios, namely text-dependent and text-independent audio authentication.

KW - audio forensics

KW - Digital audio authentication

KW - forgery

KW - human psychoacoustic principles

KW - machine learning algorithm

UR - http://www.scopus.com/inward/record.url?scp=85017584482&partnerID=8YFLogxK

U2 - 10.1109/ACCESS.2017.2672681

DO - 10.1109/ACCESS.2017.2672681

M3 - Article

VL - 5

SP - 2994

EP - 3007

JO - IEEE Access

T2 - IEEE Access

JF - IEEE Access

SN - 2169-3536

ER -