Multi-Task Learning for Acoustic Event Detection Using Event and Frame Position Information

Xianjun Xia, Roberto Togneri, Ferdous Sohel, Yuanjun Zhao, Defeng Huang

Research output: Contribution to journal › Article

Abstract

Acoustic event detection (AED) analyses acoustic signals to determine the sound type and to estimate the audio event boundaries. Multi-label classification based approaches are commonly used to detect the frame-wise event types, with a median filter applied to determine the occurring acoustic events. However, the multi-label classifiers are trained only on the acoustic event types, ignoring the frame position within the audio events. To address this, this paper proposes a joint learning based multi-task system. The first task performs acoustic event type detection and the second task predicts the frame position information. By sharing representations between the two tasks, the acoustic models generalize better than the original classifier: averaging the respective noise patterns of the two tasks acts as an implicit regularizer. Experimental results on the monophonic UPC-TALP and the polyphonic TUT Sound Event datasets demonstrate the superior performance of the joint learning method, which achieves a lower error rate and a higher F-score than the baseline AED system.
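The multi-task idea described above can be sketched as a hard-parameter-sharing network: a shared encoder feeds two heads, one for frame-wise multi-label event detection and one for frame-position prediction, trained with a weighted sum of the two losses. This is a minimal illustrative sketch, not the paper's implementation; the layer sizes, the GRU encoder, the three-class position targets (e.g. begin/middle/end), and the trade-off weight `alpha` are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class MultiTaskAED(nn.Module):
    """Illustrative joint model: a shared encoder with two task heads.
    Task 1: frame-wise multi-label event-type detection (sigmoid per event).
    Task 2: frame-position prediction (assumed here as a per-frame class)."""

    def __init__(self, n_features=40, n_events=10, n_positions=3, hidden=64):
        super().__init__()
        # Shared representation (hard parameter sharing between tasks)
        self.encoder = nn.GRU(n_features, hidden,
                              batch_first=True, bidirectional=True)
        self.event_head = nn.Linear(2 * hidden, n_events)
        self.position_head = nn.Linear(2 * hidden, n_positions)

    def forward(self, x):                  # x: (batch, frames, n_features)
        shared, _ = self.encoder(x)        # (batch, frames, 2*hidden)
        return self.event_head(shared), self.position_head(shared)

def joint_loss(event_logits, event_targets, pos_logits, pos_targets, alpha=0.5):
    """Weighted sum of the two task losses; alpha is an assumed trade-off."""
    l_event = nn.functional.binary_cross_entropy_with_logits(
        event_logits, event_targets)
    # cross_entropy expects (batch, classes, frames) for per-frame targets
    l_pos = nn.functional.cross_entropy(pos_logits.transpose(1, 2), pos_targets)
    return alpha * l_event + (1 - alpha) * l_pos
```

Because both heads read the same encoder output, gradients from the auxiliary position task shape the shared representation used by the event detector, which is the regularization effect the abstract refers to.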

Original language: English
Article number: 8788613
Pages (from-to): 569-578
Number of pages: 10
Journal: IEEE Transactions on Multimedia
Volume: 22
Issue number: 3
Publication status: Published - 1 Mar 2020
