Classification of nonverbal human produced audio events: A pilot study

Date

2018-09-06

Publisher

International Speech Communication Association

Abstract

The accurate classification of nonverbal human-produced audio events opens the door to numerous applications beyond health monitoring. Voluntary events, such as tongue clicking and teeth chattering, may lead to a novel way of silent interface command. Involuntary events, such as coughing and clearing the throat, may advance the current state of the art in hearing health research. The challenge of such applications is the balance between the processing capabilities of a small intra-aural device and the accuracy of classification. In this pilot study, 10 nonverbal audio events are captured inside the ear canal blocked by an intra-aural device. The performance of three classifiers is investigated: Gaussian Mixture Model (GMM), Support Vector Machine and Multi-Layer Perceptron. Each classifier is trained using three different feature vector structures constructed from the mel-frequency cepstral coefficients (MFCC) and their derivatives. Fusion of the MFCCs with the auditory-inspired amplitude modulation features (AAMF) is also investigated. Classification is compared between binaural and monaural training sets, as well as for noisy and clean conditions. The highest accuracy, 75.45%, is achieved with the GMM classifier trained on the binaural MFCC+AAMF clean training set. An accuracy of 73.47% is achieved by training and testing the classifier with the combined binaural clean and noisy dataset.
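
As an illustration of the kind of pipeline described above (MFCC features with their derivatives, classified by one Gaussian Mixture Model per event class), here is a minimal sketch. It is not the authors' implementation: the choice of librosa and scikit-learn, the file-path and label layout, and the model settings are assumptions for illustration, and the AAMF fusion, binaural handling and noise conditions are omitted.

    import numpy as np
    import librosa
    from sklearn.mixture import GaussianMixture

    def mfcc_features(path, sr=16000, n_mfcc=13):
        # Frame-level MFCCs stacked with their first and second derivatives,
        # one row per frame (shape: frames x 3*n_mfcc).
        y, _ = librosa.load(path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return np.vstack([mfcc,
                          librosa.feature.delta(mfcc),
                          librosa.feature.delta(mfcc, order=2)]).T

    def train_gmms(train_files_by_class, n_components=8):
        # Fit one diagonal-covariance GMM per nonverbal event class.
        gmms = {}
        for label, paths in train_files_by_class.items():
            X = np.vstack([mfcc_features(p) for p in paths])
            gmms[label] = GaussianMixture(n_components=n_components,
                                          covariance_type="diag").fit(X)
        return gmms

    def classify(path, gmms):
        # Assign the clip to the class whose GMM gives the highest
        # average frame log-likelihood.
        X = mfcc_features(path)
        return max(gmms, key=lambda label: gmms[label].score(X))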

Keywords

Nonverbal, Classification, Hearing protection, Biosignals
