An Approach for Self Training Audio Event Detectors using Web Data
Published in 25th European Signal Processing Conference (EUSIPCO), 2017
Audio Event Detection (AED) aims to recognize sounds within audio and video recordings. AED commonly employs machine learning algorithms trained and tested on annotated datasets. However, available datasets are limited in the number of samples, which makes it difficult to model acoustic diversity. Therefore, we propose combining labeled audio from a dataset with unlabeled audio from the web to improve the sound models. The audio event detectors are trained on the labeled audio and run on unlabeled audio downloaded from YouTube. Whenever the detectors recognize any of the known sounds with high confidence, the corresponding unlabeled audio is used to re-train the detectors. The performance of the re-trained detectors is compared to that of the original detectors on the annotated test set. Results showed an improvement in AED and uncovered challenges of using web audio from videos.
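The self-training loop described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: a nearest-centroid "detector" over 1-D features stands in for the actual sound models, and the confidence threshold, feature values, and class names are all made up for the example. The key steps match the abstract: train on labeled data, score unlabeled data, keep only high-confidence detections as pseudo-labels, and re-train on the union.

```python
def train(samples):
    """Fit one centroid per class from (feature, label) pairs (toy detector)."""
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Return (label, confidence); confidence is the margin between the
    distances to the two nearest class centroids."""
    dists = sorted((abs(x - c), y) for y, c in centroids.items())
    best = dists[0]
    second = dists[0] if len(dists) == 1 else dists[1]
    return best[1], second[0] - best[0]

def self_train(labeled, unlabeled, threshold):
    """One round of confidence-gated pseudo-labeling and re-training."""
    model = train(labeled)
    pseudo = []
    for x in unlabeled:
        y, conf = predict(model, x)
        if conf > threshold:           # keep only high-confidence detections
            pseudo.append((x, y))
    return train(labeled + pseudo)     # re-train on labeled + pseudo-labeled

# Hypothetical labeled set and unlabeled "web" features; 2.6 sits between
# the two classes, so its low-margin pseudo-label is discarded.
labeled = [(0.0, "siren"), (0.2, "siren"), (5.0, "dog"), (5.2, "dog")]
retrained = self_train(labeled, [0.1, 5.1, 2.6], threshold=1.0)
```

In the paper this gating idea is applied with real audio event detectors and web audio from YouTube; the same structure holds with any classifier that exposes a confidence score.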
Recommended citation: @inproceedings{shah2017approach, title={An Approach for Self-Training Audio Event Detectors Using Web Data}, author={Shah, Ankit and Badlani, Rohan and Kumar, Anurag and Elizalde, Benjamin and Raj, Bhiksha}, booktitle={25th European Signal Processing Conference (EUSIPCO)}, pages={1863--1867}, year={2017}, organization={IEEE} }
Download Paper